Bug 862319

| Summary: | [lvmetad] 'pvscan --cache' silently drops all metadata |
|---|---|
| Product: | Red Hat Enterprise Linux 6 |
| Component: | lvm2 |
| Version: | 6.4 |
| Status: | CLOSED ERRATA |
| Severity: | unspecified |
| Priority: | high |
| Reporter: | Marian Csontos <mcsontos> |
| Assignee: | Petr Rockai <prockai> |
| QA Contact: | Cluster QE <mspqa-list> |
| CC: | agk, cmarthal, coughlan, dwysocha, heinzm, jbrassow, msnitzer, nperic, prajnoha, prockai, thornber, zkabelac |
| Target Milestone: | rc |
| Target Release: | --- |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Fixed In Version: | lvm2-2.02.98-1.el6 |
| Doc Type: | Bug Fix |
| Doc Text: | (this bug was introduced between releases and never appeared in an actual LVM release, RHEL or otherwise) |
| Type: | Bug |
| Regression: | --- |
| Last Closed: | 2013-02-21 08:14:16 UTC |
Description (Marian Csontos, 2012-10-02 15:45:40 UTC)

Hmm, I need more info about this. Peter Rajnoha has seen the same thing, but I can't reproduce the problem. In tests, pvscan --cache does drop metadata, but immediately fills it back in from disks. Moreover, vgscan --cache does the same thing as pvscan --cache, so if the former works for you, I am doubly puzzled. Any further details? A sequence of commands to trigger this?

The following test passes just fine for me, and in the lvmetad communication transcript I can clearly see that everything works as expected (and I can't see any valgrind warnings either):

```
. lib/test
aux prepare_pvs 2
vgcreate $vg1 $dev1 $dev2
vgs | grep $vg1
pvscan --cache
vgs | grep $vg1
```

If it helps, "-d wire" produces the following output for "pvscan --cache" and a subsequent "pvs" command (which shows no PVs, while there should be sda through sdp and a VG over them with one test LV):
```
[1] rawhide/~ # lvmetad -f -d wire
[D] creating /run/lvm/lvmetad.socket
<- request="hello"
-> response = "OK"
-> protocol = "lvmetad"
-> version = 1
->
<- request="pv_clear_all"
<- token="filter:0"
-> response = "token_mismatch"
-> expected = ""
-> received = "filter:0"
-> reason = "token mismatch"
->
<- request="token_update"
<- token="update in progress"
-> response = "OK"
->
<- request="pv_clear_all"
<- token="update in progress"
-> response = "OK"
->
<- request="token_update"
<- token="filter:0"
-> response = "OK"
->
<- request="pv_clear_all"
<- token="filter:0"
-> response = "OK"
->
<- request="hello"
-> response = "OK"
-> protocol = "lvmetad"
-> version = 1
->
<- request="pv_list"
<- token="filter:0"
-> response="OK"
->
-> physical_volumes {
-> }
->
->
```
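The "-d wire" transcript follows a simple line-oriented convention: lines prefixed with `<-` are fields of an incoming request, lines prefixed with `->` are fields of the daemon's response, and a bare `->` ends the response. As an illustrative sketch (not part of lvmetad itself, it only reads the debug log format shown above), the exchanges can be grouped and token-mismatch rejections like the one above flagged:

```python
# Illustrative parser for lvmetad's "-d wire" debug transcript.
# Not part of lvmetad; it only understands the log format shown above.

def parse_exchanges(transcript):
    """Group '<-' (request) and '->' (response) lines into exchanges."""
    exchanges = []
    req, resp = {}, {}
    for line in transcript.splitlines():
        line = line.strip()
        if line.startswith("<-"):
            # A new request begins once the previous response is complete.
            if resp:
                exchanges.append((req, resp))
                req, resp = {}, {}
            key, _, value = line[2:].strip().partition("=")
            req[key.strip()] = value.strip().strip('"')
        elif line.startswith("->"):
            body = line[2:].strip()
            if body:  # a bare '->' is just a response terminator
                key, _, value = body.partition("=")
                resp[key.strip()] = value.strip().strip('"')
    if req or resp:
        exchanges.append((req, resp))
    return exchanges

def token_mismatches(exchanges):
    """Return request/response pairs rejected with a token mismatch."""
    return [(req, resp) for req, resp in exchanges
            if resp.get("response") == "token_mismatch"]
```

Run over the transcript above, this would single out the first `pv_clear_all`, which was rejected because the daemon expected an empty token while the client sent `filter:0`.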
On my machine, it's 100% reproducible. Mornfall, ping me for more debug info if needed...
Created attachment 623353 [details]: pvscan --cache log

Created attachment 623354 [details]: pvs after pvscan --cache log
Fixed upstream now.

=> Will be fixed in 2.02.98. The bug was introduced upstream after 2.02.97, so it is not present in any official upstream release.

Cannot reproduce it with the newest upstream LVM version:
```
(10:09:14) [root@r6-node02:~]$ vgcreate /dev/sd{a..f}1
  /dev/sda1: already exists in filesystem
  New volume group name "sda1" is invalid
  Run `vgcreate --help' for more information.
(10:09:24) [root@r6-node02:~]$ vgcreate vg /dev/sd{a..f}1
  No physical volume found in lvmetad cache for /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
  Volume group "vg" successfully created
(10:09:29) [root@r6-node02:~]$ lvcreate -l2 vg
  Logical volume "lvol0" created
(10:09:36) [root@r6-node02:~]$ lvcreate -l2 vg
  Logical volume "lvol1" created
(10:09:37) [root@r6-node02:~]$ lvcreate -l2 vg
  Logical volume "lvol2" created
(10:09:37) [root@r6-node02:~]$ lvcreate -l2 vg
  Logical volume "lvol3" created
(10:09:38) [root@r6-node02:~]$ lvcreate -l2 vg
  Logical volume "lvol4" created
(10:09:38) [root@r6-node02:~]$ pvscan --cache
(10:09:47) [root@r6-node02:~]$ lvs
  LV      VG       Attr      LSize Pool Origin Data% Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao--- 7.54g
  lv_swap VolGroup -wi-ao--- 1.97g
  lvol0   vg       -wi-a---- 8.00m
  lvol1   vg       -wi-a---- 8.00m
  lvol2   vg       -wi-a---- 8.00m
  lvol3   vg       -wi-a---- 8.00m
  lvol4   vg       -wi-a---- 8.00m
(10:09:49) [root@r6-node02:~]$ vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   1   2   0 wz--n-  9.51g     0
  vg         6   5   0 wz--n- 59.95g 59.91g
```

Versions installed:

```
lvm2-2.02.98-8.el6.x86_64
lvm2-libs-2.02.98-8.el6.x86_64
device-mapper-1.02.77-8.el6.x86_64
kernel-2.6.32-354.el6.x86_64
```
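The affected window is narrow: the bug was introduced upstream after 2.02.97 and fixed before 2.02.98, so only development snapshots between those two releases carried it. As a quick illustrative check (the dotted-version parsing here is an assumption for the sketch, not an official LVM tool), one can test whether an upstream version string falls inside that window:

```python
# Illustrative check: does a given upstream lvm2 version fall inside the
# affected development window (after 2.02.97, before 2.02.98)?
# The dotted-number parsing is an assumption, not an official LVM utility.

def version_key(version):
    """Turn a dotted version like '2.02.97' into a comparable tuple (2, 2, 97)."""
    return tuple(int(part) for part in version.split("."))

def in_affected_window(version):
    """True only for snapshots strictly between the 2.02.97 and 2.02.98 releases."""
    return version_key("2.02.97") < version_key(version) < version_key("2.02.98")
```

Both released versions fall outside the window, which matches the comment above that no official upstream release (and hence no shipped RHEL build such as lvm2-2.02.98-1.el6) contained the bug.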
I too was unable to reproduce this issue in the latest rpms (with or without lvmetad running). I've also added a 'pvscan --cache' to all the display REG checks for the snapshot, mirror, and raid REG test suites in order to check for this in the future.

```
2.6.32-354.el6.x86_64
lvm2-2.02.98-8.el6                      BUILT: Wed Jan 16 07:57:25 CST 2013
lvm2-libs-2.02.98-8.el6                 BUILT: Wed Jan 16 07:57:25 CST 2013
lvm2-cluster-2.02.98-8.el6              BUILT: Wed Jan 16 07:57:25 CST 2013
udev-147-2.43.el6                       BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-8.el6             BUILT: Wed Jan 16 07:57:25 CST 2013
device-mapper-libs-1.02.77-8.el6        BUILT: Wed Jan 16 07:57:25 CST 2013
device-mapper-event-1.02.77-8.el6       BUILT: Wed Jan 16 07:57:25 CST 2013
device-mapper-event-libs-1.02.77-8.el6  BUILT: Wed Jan 16 07:57:25 CST 2013
cmirror-2.02.98-8.el6                   BUILT: Wed Jan 16 07:57:25 CST 2013
```

```
SCENARIO - [display_snap] Create a snapshot and then display it a couple ways
Making origin volume
lvcreate -L 300M snapper -n origin
Making snapshot of origin volume
lvcreate -s /dev/snapper/origin -c 128 -n display_snap -L 100M
Update MDA cache (quick reg check for BZ 862319)
Display snapshot using lvdisplay
Display snapshot using lvs
Display snapshot using lvscan
Removing volume snapper/display_snap
Removing origin snapper/origin

SCENARIO (raid1) - [display_raid] Create a raid and then display it a couple ways
taft-01: lvcreate --type raid1 -m 1 -n display_raid -L 300M --nosync raid_sanity
WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
Update MDA cache (quick reg check for BZ 862319)
Display raid using lvdisplay
Display raid using lvs
Display raid using lvscan
Display using dmsetup
Deactivating raid display_raid... and removing

SCENARIO - [display_mirror] Create a mirror and then display it a couple ways
taft-01: lvcreate -m 1 -n display_mirror -L 300M --nosync mirror_sanity
WARNING: New mirror won't be synchronised. Don't read what you didn't write!
Update MDA cache (quick reg check for BZ 862319)
Display mirror using lvdisplay
Display mirror using lvs
Display mirror using lvscan
Display using dmsetup
Deactivating mirror display_mirror... and removing
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html