
Bug 862319

Summary: [lvmetad] 'pvscan --cache' silently drops all metadata
Product: Red Hat Enterprise Linux 6
Reporter: Marian Csontos <mcsontos>
Component: lvm2
Assignee: Petr Rockai <prockai>
Status: CLOSED ERRATA
QA Contact: Cluster QE <mspqa-list>
Severity: unspecified
Docs Contact:
Priority: high
Version: 6.4
CC: agk, cmarthal, coughlan, dwysocha, heinzm, jbrassow, msnitzer, nperic, prajnoha, prockai, thornber, zkabelac
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: lvm2-2.02.98-1.el6
Doc Type: Bug Fix
Doc Text: (this bug was introduced between releases and never appeared in an actual LVM release, RHEL or otherwise)
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-02-21 08:14:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments (description / flags):
- pvscan --cache log (none)
- pvs after pvscan --cache log (none)

Description Marian Csontos 2012-10-02 15:45:40 UTC
Description of problem:
Running 'pvscan --cache' with no arguments silently drops all metadata.

Version-Release number of selected component (if applicable):
- upstream git commit 886656e4 (nightly build 2.02.98-0.183.el6.x86_64)

How reproducible:
100%

Steps to Reproduce:
0. Create some VGs and LVs.
1. Run 'pvscan --cache'.
2. Run lvs, vgs, pvs: everything is empty.
3. Run 'vgscan --cache'.
4. The metadata is back.
5. Run 'pvscan --cache PV'.
6. Everything is correct again.
  
pvscan --cache -vvvv output:

#libdm-config.c:863       Setting activation/monitoring to 1
#lvmcmdline.c:1088         Processing: pvscan --cache -vvvv
#lvmcmdline.c:1091         O_DIRECT will be used
#libdm-config.c:799       Setting global/locking_type to 1
#libdm-config.c:799       Setting global/wait_for_locks to 1
#locking/locking.c:242       File-based locking selected.
#libdm-config.c:768       Setting global/locking_dir to /var/lock/lvm
#libdm-config.c:863       Setting global/prioritise_write_locks to 1
#locking/file_locking.c:236       Locking /var/lock/lvm/P_global RB
#locking/file_locking.c:141         _do_flock /var/lock/lvm/P_global:aux WB
#locking/file_locking.c:51         _undo_flock /var/lock/lvm/P_global:aux
#locking/file_locking.c:141         _do_flock /var/lock/lvm/P_global RB
#libdm-config.c:768       Setting response to OK
#libdm-config.c:768       Setting response to OK
#locking/file_locking.c:74       Unlocking /var/lock/lvm/P_global
#locking/file_locking.c:51         _undo_flock /var/lock/lvm/P_global

Is 'vgscan --cache' now the preferred and *only* way to rescan all PVs? Even so, 'pvscan --cache' should not drop the cache.

Comment 2 Petr Rockai 2012-10-07 20:29:52 UTC
Hmm, I need more info about this. Peter Rajnoha has seen the same thing, but I can't reproduce the problem. In my tests, pvscan --cache does drop the metadata, but immediately fills it back in from the disks. Moreover, vgscan --cache does the same thing as pvscan --cache, so if the former works for you, I am doubly puzzled. Any further details? A sequence of commands to trigger this?

The following test passes just fine for me, and in lvmetad communication transcript I can clearly see everything works as expected (and I can't see any valgrind warnings either):

. lib/test

aux prepare_pvs 2

vgcreate $vg1 $dev1 $dev2
vgs | grep $vg1

pvscan --cache

vgs | grep $vg1

Comment 3 Peter Rajnoha 2012-10-08 09:18:33 UTC
If it helps, running lvmetad with "-d wire" produces this output for "pvscan --cache" and a subsequent "pvs" command (it shows no PVs, while there should be sda through sdp and a VG over them with one test LV).

[1] rawhide/~ # lvmetad -f -d wire
[D] creating /run/lvm/lvmetad.socket
<- request="hello"
-> response = "OK"
-> protocol = "lvmetad"
-> version = 1
-> 
<- request="pv_clear_all"
<- token="filter:0"
-> response = "token_mismatch"
-> expected = ""
-> received = "filter:0"
-> reason = "token mismatch"
-> 
<- request="token_update"
<- token="update in progress"
-> response = "OK"
-> 
<- request="pv_clear_all"
<- token="update in progress"
-> response = "OK"
-> 
<- request="token_update"
<- token="filter:0"
-> response = "OK"
-> 
<- request="pv_clear_all"
<- token="filter:0"
-> response = "OK"
-> 
<- request="hello"
-> response = "OK"
-> protocol = "lvmetad"
-> version = 1
-> 
<- request="pv_list"
<- token="filter:0"
-> response="OK"
-> 
-> physical_volumes {
-> }
-> 
-> 
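The handshake above can be modeled with a short Python sketch. This is a hypothetical, simplified illustration based only on the transcript (the `Lvmetad` class, its in-memory PV store, and the request names as Python strings are illustrative, not actual lvmetad code): a stale token triggers "token_mismatch", the client updates the token and retries, the retried "pv_clear_all" succeeds, and, since nothing repopulates the cache afterwards, "pv_list" comes back empty.

```python
# Simplified model of lvmetad's token check and PV store, reconstructed
# from the wire transcript above. Not the real daemon code.

class Lvmetad:
    def __init__(self):
        self.token = ""           # daemon starts with no filter token
        self.pvs = {"sda": "vg"}  # pretend one PV is already cached

    def request(self, name, token):
        # "token_update" is the only request allowed to change the token;
        # every other request must carry the token the daemon expects.
        if name == "token_update":
            self.token = token
            return {"response": "OK"}
        if token != self.token:
            return {"response": "token_mismatch",
                    "expected": self.token, "received": token}
        if name == "pv_clear_all":
            self.pvs.clear()
            return {"response": "OK"}
        if name == "pv_list":
            return {"response": "OK", "physical_volumes": dict(self.pvs)}
        return {"response": "OK"}


d = Lvmetad()
# First attempt fails, mirroring the transcript (expected = ""):
r1 = d.request("pv_clear_all", token="filter:0")
# The client updates the token and retries, which now succeeds:
d.request("token_update", token="update in progress")
d.request("pv_clear_all", token="update in progress")
d.request("token_update", token="filter:0")
r2 = d.request("pv_clear_all", token="filter:0")
# The bug: nothing re-adds the PVs after the clear, so pv_list is empty.
r3 = d.request("pv_list", token="filter:0")
```

The key observation from the transcript is the last exchange: "pv_clear_all" is issued three times but no PVs are ever sent back to the daemon, so the subsequent "pv_list" from "pvs" sees an empty `physical_volumes` section.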

On my machine, it's 100% reproducible. Mornfall, ping me for more debug info if needed...

Comment 4 Peter Rajnoha 2012-10-08 09:19:07 UTC
Created attachment 623353 [details]
pvscan --cache log

Comment 5 Peter Rajnoha 2012-10-08 09:19:46 UTC
Created attachment 623354 [details]
pvs after pvscan --cache log

Comment 6 Petr Rockai 2012-10-09 08:32:16 UTC
Fixed upstream now.

Comment 8 Alasdair Kergon 2012-10-09 12:23:09 UTC
=> will be fixed in 2.02.98

Comment 9 Alasdair Kergon 2012-10-09 12:25:50 UTC
Was introduced upstream after 2.02.97, so it is not present in any official upstream release.

Comment 11 Nenad Peric 2013-01-17 16:12:25 UTC
Cannot reproduce it with the newest upstream LVM version:

(10:09:14) [root@r6-node02:~]$ vgcreate /dev/sd{a..f}1
  /dev/sda1: already exists in filesystem
  New volume group name "sda1" is invalid
  Run `vgcreate --help' for more information.
(10:09:24) [root@r6-node02:~]$ vgcreate vg /dev/sd{a..f}1
  No physical volume found in lvmetad cache for /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
  Volume group "vg" successfully created
(10:09:29) [root@r6-node02:~]$ lvcreate -l2 vg
  Logical volume "lvol0" created
(10:09:36) [root@r6-node02:~]$ lvcreate -l2 vg
  Logical volume "lvol1" created
(10:09:37) [root@r6-node02:~]$ lvcreate -l2 vg
  Logical volume "lvol2" created
(10:09:37) [root@r6-node02:~]$ lvcreate -l2 vg
  Logical volume "lvol3" created
(10:09:38) [root@r6-node02:~]$ lvcreate -l2 vg
  Logical volume "lvol4" created
(10:09:38) [root@r6-node02:~]$ pvscan --cache 
(10:09:47) [root@r6-node02:~]$ lvs
  LV      VG       Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao--- 7.54g                                             
  lv_swap VolGroup -wi-ao--- 1.97g                                             
  lvol0   vg       -wi-a---- 8.00m                                             
  lvol1   vg       -wi-a---- 8.00m                                             
  lvol2   vg       -wi-a---- 8.00m                                             
  lvol3   vg       -wi-a---- 8.00m                                             
  lvol4   vg       -wi-a---- 8.00m                                             
(10:09:49) [root@r6-node02:~]$ vgs
  VG       #PV #LV #SN Attr   VSize  VFree 
  VolGroup   1   2   0 wz--n-  9.51g     0 
  vg         6   5   0 wz--n- 59.95g 59.91g

versions installed:

lvm2-2.02.98-8.el6.x86_64
lvm2-libs-2.02.98-8.el6.x86_64
device-mapper-1.02.77-8.el6.x86_64
kernel-2.6.32-354.el6.x86_64

Comment 12 Corey Marthaler 2013-01-17 20:44:34 UTC
I too was unable to reproduce this issue in the latest rpms (w/ or w/o lvmetad running). I've also added a 'pvscan --cache' to all the display REG checks for the snapshot, mirror, and raid REG test suites in order to check for this in the future.


2.6.32-354.el6.x86_64
lvm2-2.02.98-8.el6    BUILT: Wed Jan 16 07:57:25 CST 2013
lvm2-libs-2.02.98-8.el6    BUILT: Wed Jan 16 07:57:25 CST 2013
lvm2-cluster-2.02.98-8.el6    BUILT: Wed Jan 16 07:57:25 CST 2013
udev-147-2.43.el6    BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-8.el6    BUILT: Wed Jan 16 07:57:25 CST 2013
device-mapper-libs-1.02.77-8.el6    BUILT: Wed Jan 16 07:57:25 CST 2013
device-mapper-event-1.02.77-8.el6    BUILT: Wed Jan 16 07:57:25 CST 2013
device-mapper-event-libs-1.02.77-8.el6    BUILT: Wed Jan 16 07:57:25 CST 2013
cmirror-2.02.98-8.el6    BUILT: Wed Jan 16 07:57:25 CST 2013


SCENARIO - [display_snap]
Create a snapshot and then display it a couple ways
Making origin volume
lvcreate -L 300M snapper -n origin
Making snapshot of origin volume
lvcreate -s /dev/snapper/origin -c 128 -n display_snap -L 100M
Update MDA cache (quick reg check for BZ 862319)
Display snapshot using lvdisplay
Display snapshot using lvs
Display snapshot using lvscan
Removing volume snapper/display_snap
Removing origin snapper/origin


SCENARIO (raid1) - [display_raid]
Create a raid and then display it a couple ways
taft-01: lvcreate --type raid1 -m 1 -n display_raid -L 300M --nosync raid_sanity
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
Update MDA cache (quick reg check for BZ 862319)
Display raid using lvdisplay
Display raid using lvs
Display raid using lvscan
Display using dmsetup
Deactivating raid display_raid... and removing


SCENARIO - [display_mirror]
Create a mirror and then display it a couple ways
taft-01: lvcreate -m 1 -n display_mirror -L 300M --nosync mirror_sanity
  WARNING: New mirror won't be synchronised. Don't read what you didn't write!
Update MDA cache (quick reg check for BZ 862319)
Display mirror using lvdisplay
Display mirror using lvs
Display mirror using lvscan
Display using dmsetup
Deactivating mirror display_mirror... and removing

Comment 13 errata-xmlrpc 2013-02-21 08:14:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html