Bug 863401
| Summary: | [lvmetad] lvm1 type metadata: pvs cycling after vgscan --cache, vgs not showing the LVM1 type VG | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Marian Csontos <mcsontos> |
| Component: | lvm2 | Assignee: | Petr Rockai <prockai> |
| Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | medium | | |
| Version: | 6.4 | CC: | agk, cmarthal, coughlan, dwysocha, heinzm, jbrassow, msnitzer, prajnoha, prockai, thornber, zkabelac |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.98-1.el6 | Doc Type: | Bug Fix |
| Doc Text: | When using LVM1 (legacy) metadata and lvmetad together, LVM commands could run into infinite loops (hang) when invoked. The problem (which was fixed) was a failure in "pvscan --cache" to read part of the LVM1 metadata. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-02-21 08:14:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
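The Doc Text above refers to `pvscan --cache`, the call lvmetad relies on to ingest on-disk metadata. A minimal illustration of invoking it by hand (the device path is taken from the verification transcript below; your device will differ):

```
# Re-read one device's metadata and push it into lvmetad
pvscan --cache /dev/vdb1

# Or rescan all devices known to the device cache
pvscan --cache
```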
*Updated reproducer*: after vgcreate, run `vgscan --cache`.

The repeating part in `pvs -vvvv` is the following:

```
#format1/disk-rep.c:404   Found /dev/vdh1 in VG snapper
#device/dev-io.c:577   Closed /dev/vdh1
#device/dev-cache.c:600   unknown device: stat failed: No such file or directory
#metadata/metadata.c:3626   <backtrace>
#metadata/metadata.c:2771   <backtrace>
#format1/format1.c:317   Reading physical volume data /dev/vdh1 from disk
#device/dev-io.c:524   Opened /dev/vdh1 RO O_DIRECT
#device/dev-io.c:137   /dev/vdh1: block size is 512 bytes
```

Looks like this is caused by a missing VG record. To be sure: this happens only *with lvmetad running*. Also, `lvremove` of any LV on an lvm1-type VG displays errors (which again may be related), and `vgscan --cache` is not the way to sort it out.

I am happy with a "Release note only" solution: lvmetad + lvm1 is not supported. But should this be fixed, please verify other operations as well.

Can't reproduce in tests. I have pushed the following test:

```
. lib/test

test -e LOCAL_LVMETAD || skip
aux prepare_devs 2

pvcreate --metadatatype 1 $dev1
vgscan --cache
pvs | grep $dev1
vgcreate --metadatatype 1 $vg1 $dev1
vgs | grep $vg1
pvs | grep $dev1
```

and it passes both on my machine and in hydra. Any further details you could share to help track this down?

(In reply to comment #4)
> Can't reproduce in tests.

You must run vgscan --cache after vgcreate:

(In reply to comment #1)
> *Updated reproducer*: after vgcreate run `vgscan --cache`

Oh, I see. Fixed in deea86c7f49ea825608826e29b56a005e2c9e747. The current upstream test case is this (it reliably trips the bug):

```
. lib/test

test -e LOCAL_LVMETAD || skip
aux prepare_devs 2

pvcreate --metadatatype 1 $dev1
vgscan --cache
pvs | grep $dev1
vgcreate --metadatatype 1 $vg1 $dev1
vgscan --cache
vgs | grep $vg1
pvs | grep $dev1
```

Fix verified in the latest rpms.

```
2.6.32-354.el6.x86_64

lvm2-2.02.98-9.el6                    BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-libs-2.02.98-9.el6               BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-cluster-2.02.98-9.el6            BUILT: Wed Jan 23 10:06:55 CST 2013
udev-147-2.43.el6                     BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-9.el6           BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-libs-1.02.77-9.el6      BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-1.02.77-9.el6     BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-libs-1.02.77-9.el6  BUILT: Wed Jan 23 10:06:55 CST 2013
cmirror-2.02.98-9.el6                 BUILT: Wed Jan 23 10:06:55 CST 2013
```

```
[root@qalvm-01 ~]# ps -ef | grep lvmetad
root      3771     1  0 17:09 ?        00:00:00 lvmetad
root      6572  1926  0 17:22 pts/0    00:00:00 grep lvmetad

[root@qalvm-01 ~]# pvcreate --metadatatype 1 /dev/vdb1
  Physical volume "/dev/vdb1" successfully created

[root@qalvm-01 ~]# vgscan --cache
  Reading all physical volumes. This may take a while...
  Found volume group "vg_qalvm01" using metadata type lvm2

[root@qalvm-01 ~]# pvs | grep /dev/vdb1
  /dev/vdb1      lvm1 a--  10.00g 10.00g

[root@qalvm-01 ~]# vgcreate --metadatatype 1 VG /dev/vdb1
  Volume group "VG" successfully created

[root@qalvm-01 ~]# vgscan --cache
  Reading all physical volumes. This may take a while...
  Found volume group "VG" using metadata type lvm1
  Found volume group "vg_qalvm01" using metadata type lvm2

[root@qalvm-01 ~]# vgs | grep VG
  VG   #PV #LV #SN Attr   VSize VFree
  VG     1   0   0 wz--n- 9.99g 9.99g

[root@qalvm-01 ~]# pvs | grep /dev/vdb1
  /dev/vdb1  VG   lvm1 a--  9.99g 9.99g
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html
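For builds that predate the fix in lvm2-2.02.98-1.el6, a workaround consistent with the "lvmetad + lvm1 is not supported" remark above is to turn lvmetad off so commands scan disks directly. A minimal sketch, assuming the stock RHEL 6 lvm.conf path and the lvm2-lvmetad initscript name:

```
# Disable the lvmetad cache: set use_lvmetad = 0 in the global
# section of /etc/lvm/lvm.conf (sed pattern assumes the default layout)
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf

# Stop the running daemon (initscript name assumed)
service lvm2-lvmetad stop

# pvs/vgs now read metadata from disk instead of the cache
pvs
vgs
```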
Description of problem:
When creating an lvm1-type PV + VG, `pvs` enters an infinite loop after `vgscan --cache`.

Version-Release number of selected component (if applicable):
upstream lvm2

How reproducible:
100%

Steps to Reproduce:
```
PV1=/dev/PV1
VG=vg
pvcreate --metadatatype 1 $PV1
vgscan --cache
pvs                                  # OK
vgcreate --metadatatype 1 $VG $PV1
vgs                                  # LVM1 type VG is not listed
pvs                                  # ERROR
```

Actual results:
`vgs` does not show $VG; `pvs` enters an infinite loop.

Expected results:
`vgs` should list $VG; `pvs` should list the PVs.

Additional info:
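Since the failure mode is an infinite loop rather than an error exit, any automated check of the steps above needs a bound on the command's runtime. A minimal sketch using coreutils `timeout(1)`; the 30-second limit is an arbitrary choice, not from this report:

```
# timeout returns non-zero (124) if pvs is still cycling after 30 seconds
if ! timeout 30 pvs >/dev/null 2>&1; then
    echo "pvs did not complete within 30s -- bug likely reproduced"
fi
```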