Bug 837599 - [lvmetad] vgscan --cache does not flush information in lvmetad first
Summary: [lvmetad] vgscan --cache does not flush information in lvmetad first
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Petr Rockai
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-07-04 10:51 UTC by Marian Csontos
Modified: 2013-02-21 08:11 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.98-1.el6
Doc Type: Bug Fix
Doc Text:
Issuing vgscan --cache (to refresh lvmetad) did not remove data about PVs or VGs that no longer existed; it only updated the metadata of existing entities. This has been fixed, and vgscan --cache now removes any metadata that is no longer relevant.
Clone Of:
Environment:
Last Closed: 2013-02-21 08:11:19 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:0501 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2013-02-20 21:30:45 UTC

Description Marian Csontos 2012-07-04 10:51:19 UTC
Description of problem:
Running `vgscan --cache` does not flush the metadata held by lvmetad before scanning. To reread the metadata, one has to restart lvmetad or call `pvscan --cache`.
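
A minimal workaround sketch based on the description above (the lvm2-lvmetad service name is an assumption for RHEL 6):

# either restart the daemon so it repopulates its cache from scratch...
service lvm2-lvmetad restart
# ...or rescan with pvscan, which does drop the stale entries
pvscan --cache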

Version-Release number of selected component (if applicable):
lvm2-2.02.96-10.el6

How reproducible:
100%

Steps to Reproduce:

# running lvmetad is assumed
PV=/dev/loop0
pvcreate $PV
vgcreate vgcorrupted $PV
lvcreate -n lvcorrupted -L 10M vgcorrupted
vgscan --cache
# corrupt the MDA: zero the first 16000 512-byte sectors (~8 MB),
# wiping the PV label and the first metadata area
dd if=/dev/zero of=$PV count=16000
vgscan --cache

Actual results:
The VG is not listed by the second `vgscan --cache`, but the PV, VG and LV are still listed in the `pvs`, `vgs` and `lvs` output.
Writing to such a corrupted MDA will correctly fail.
Only `pvscan --cache` removes the stale entries;
the dm device is left behind - run `dmsetup remove vgcorrupted-lvcorrupted` to clean up.
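
A cleanup sketch of the workaround observed above (device and LV names taken from the steps to reproduce):

pvscan --cache                           # drops the stale PV/VG/LV entries from lvmetad
dmsetup remove vgcorrupted-lvcorrupted   # removes the leftover device-mapper device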

Expected results:
Error?

Additional info:

Comment 2 Peter Rajnoha 2012-07-04 11:24:31 UTC
There's also a problem with pvscan (probably the same as, or closely related to, the problem already reported here):

If, for some reason (I'm still investigating why), I end up with:

[1] alatyr/~ # pvs
  No device found for PV OAB20Z-HCz7-gxbJ-0vX1-cJV6-uVvu-lxQe2r.
  No device found for PV gct2EQ-gbTi-WPnl-4Eh1-YuP0-51KX-PL69FO.
  No device found for PV rSVXb0-9i7G-8Ra3-60N8-40bh-9P3I-7r6OuD.
  No device found for PV pf6EM0-nlZB-yXgl-7JSH-0nct-1Xby-kmdrha.
  No device found for PV i2BcEj-ILeH-7ZeW-jzbb-5tFw-uFn0-StlnE1.
  No device found for PV XaFv1W-PI0X-q3Da-xP7v-EHfx-TEYp-TBKJW1.
  No device found for PV 0cq5aT-ZHCi-kQse-0Z8z-omSE-cf2d-5mvdBk.
  No device found for PV ULblbL-AS9N-T5xj-hcuL-tfsX-9NNE-WBjMDG.
  No device found for PV ZuHtl3-9HIa-ZojI-rn2R-kleG-XHnB-Nha8QJ.
  No device found for PV ACGdB0-lB1R-qeUw-pBrk-NeHa-f9kq-bRLID0.
  No device found for PV iSvMu3-TtV6-aLwV-cYoE-HL9J-ik5q-jOlrVI.
  No device found for PV tyfM1X-AMtO-nKP0-OjJi-e8zQ-p0mn-Q0pp68.
  No device found for PV C4gXfA-shPV-WpBX-sUUp-GdKc-z2st-FoB2bW.
  No device found for PV ue5OmP-ctJ1-1NrV-CuLg-nDdM-FhZK-RXOOdR.
  No device found for PV Tm5fL1-xcTg-PA3r-R3xw-yOVU-BfUy-hw1TPG.
  No device found for PV fJanDp-3D3c-xhqS-mUVe-ND8e-b7OF-aBRMwI.
  No device found for PV C3KaCT-9xxj-Pddf-HNql-Rqff-9lQg-Zdk0B0.
  No device found for PV GRqDtx-NVvk-CveU-TbY0-6how-USq7-80M2DT.
  No device found for PV cs3aNN-LhF2-Qe0p-3oIJ-ZbRY-OG3e-woccJ3.
  No device found for PV 8hHsHu-EgoK-00x1-CzVe-vd8K-J0jy-d0AYQV.
  No device found for PV QoE3k2-XSjp-8kvp-ZJoi-qhVx-Hv62-Wspn6R.
  No device found for PV u6WXUh-F6wh-luNz-uAN1-AV7u-JKIP-KRY02k.
  No device found for PV yDytGb-UjOH-byIH-yOts-QFqC-uLps-9921dS.
  No device found for PV j0C2sX-4a6R-lFcA-66bA-hg3W-STc2-9nOzIl.
  No device found for PV XL5a6z-9FJe-jECf-cHdI-RPXb-A02l-YJxuFl.
  No device found for PV Gf4jOE-dcIF-PQSW-MP95-vTvB-TRXM-gqd5H9.
  No device found for PV lsfnKk-4TCe-g9MF-Cdo8-td0q-rY3D-nzGpTM.
  No device found for PV 5bZSuj-0Mvo-a86k-H2io-tkDH-TULc-LnI9O6.
  No device found for PV f52muY-i8Td-1T6y-bbtQ-216j-QXYO-NKfVfM.
  No device found for PV r9oU3P-vTgK-OelU-8aqn-Fhhb-5yrd-7HHWzr.
  No device found for PV cbU1Pb-3Nsl-uAMw-Tjqq-mizd-RqWT-ip6RIt.
  No device found for PV 3bWaWD-EJTg-HKhX-69W5-Bacf-BLmI-hCN1WK.
  No device found for PV Tq0YzT-OlyZ-iaHw-02dd-4Xgt-gVNK-4kBdH2.
  No device found for PV I1QBQr-Eu3D-L2e6-zFTJ-psbg-1O49-cTJHJM.
  No device found for PV jTGYfV-W26N-cV8H-H0lj-yA7k-vZ9r-Le84ck.
  No device found for PV 2qDPjb-dsC2-zk8F-yrtY-GFSk-ZfqR-OBs0kz.
  No device found for PV 87efLc-j2vd-vr6V-mimS-QrBZ-iuMg-P7A58e.
  No device found for PV vI0WP2-b1Um-ptGq-83CD-2bBR-JpW8-cm4icz.
  No device found for PV SQlkg0-ksqe-qDtx-zzFg-y2eW-ycFb-rOXCXS.
  No device found for PV HOYa0A-AFpF-Y4Jf-giFo-ESsF-XJHS-r0VmCq.
  No device found for PV hPJIJb-vGqW-iQ3H-vAl8-2ozs-HNvw-5fOBf6.
  No device found for PV 4o7CI8-SnZk-wPF0-53gN-PCpJ-COns-4MM9wf.
  No device found for PV 1d8744-izLP-L1Mw-uU1c-9rDj-Wu9J-3bQGtM.
  No device found for PV N3Gj7m-ZHeP-5W1R-B07p-lcGa-7luM-v1wmDS.
  No device found for PV QoBCmw-mlpu-etqi-L2ke-j7DF-CziG-NkePpY.
  No device found for PV R262xZ-kzAA-U03e-cWB8-qInm-L8H3-cTknVP.
  No device found for PV l33QvS-qp24-Meo0-sq8F-KQNF-XkP2-WXdkGL.
  No device found for PV UN5RTq-yXPf-8qhY-xSwF-QOPn-FWQ0-UiLPLH.
  No device found for PV s4AOwi-Tu24-wsiH-y7Yp-3H4U-kG4N-bcD0Sz.
  No device found for PV Ba2Xe2-1o0B-4mzu-77uN-xftD-iCOx-R7B6dE.
  No device found for PV PBQppR-6xc8-eAzB-I4n5-hXrM-vWev-xwz5G9.
  No device found for PV QUHwNG-1c7p-cr5N-rGUo-d1aw-NfXU-WkKPiI.
  No device found for PV nbgJGa-6sH7-STwv-rSg2-FAHg-nBb6-XVpBoR.
  No device found for PV EEO9ia-6W9P-mBLP-49fy-TVrf-mJV6-OsZwx6.
  No device found for PV oTDFd7-0xGj-vvQO-UiPu-Lg4Q-mMQh-ZHqliD.
  No device found for PV s6uTev-cku5-Jeme-2PGA-YWmL-an9G-dKY2bI.
  No device found for PV uI29YQ-RVlk-c2w0-3DLW-smiA-klOG-S49MPe.
  No device found for PV lJTPOl-rSe3-bPe7-OmLs-ICID-eWf7-kDDpOL.
  No device found for PV NcZ2pN-qAoA-o0la-Vmhi-2Oez-tR0H-D7gniw.
  No device found for PV UY6yZd-Yity-ZiMM-vSTO-2uT8-sa4h-fdBx9G.
  No device found for PV cvGZaL-XA1K-1TVs-e9A9-kiIf-6HC1-wkOh7J.
  No device found for PV 1OIJrt-YcLY-g0d1-s3ut-By1u-8HsI-IHS1aY.
  No device found for PV CnRROT-wJTc-Xfep-UMzw-mGf3-1F9j-heeW4s.
  PV                                                    VG        Fmt  Attr PSize   PFree 
  /dev/mapper/luks-58353f0b-7f26-4b86-a087-59dfb1b1d6db vg_alatyr lvm2 a--   99.97g     0 
  /dev/sda3                                             vg_data   lvm2 a--  365.26g 50.13g

[1] alatyr/~ # pvscan --cache
  No PV label found on /dev/sda1.
  No PV label found on /dev/vg_alatyr/lv_root.
  No PV label found on /dev/sda2.
  No PV label found on /dev/vg_alatyr/lv_swap.
  No PV label found on /dev/vg_alatyr/lv_home.

[1] alatyr/~ # pvs
  No device found for PV OAB20Z-HCz7-gxbJ-0vX1-cJV6-uVvu-lxQe2r.
  No device found for PV gct2EQ-gbTi-WPnl-4Eh1-YuP0-51KX-PL69FO.
  No device found for PV rSVXb0-9i7G-8Ra3-60N8-40bh-9P3I-7r6OuD.
  etc...

The "pvscan --cache" should have flushed all incorrect information and the "pvs" called afterwards should have clean info...

Comment 3 Peter Rajnoha 2012-07-04 11:34:41 UTC
An excerpt from the -vvvv log of the pvs command: the regex filter correctly skips the device; however, the lvmetad code still tries to access it:

#filters/filter-regex.c:173         /dev/vg_data/rhel6_stg_shared_01: Skipping (regex)
#cache/lvmetad.c:124   No device found for PV gct2EQ-gbTi-WPnl-4Eh1-YuP0-51KX-PL69FO.
#libdm-config.c:758       Setting id to rSVXb0-9i7G-8Ra3-60N8-40bh-9P3I-7r6OuD
#libdm-config.c:758       Setting format to lvm2
#libdm-config.c:789       Setting device to 64821
#libdm-config.c:789       Setting dev_size to 134217728
#libdm-config.c:789       Setting label_sector to 1

Comment 4 Peter Rajnoha 2012-07-04 12:32:35 UTC
So there are actually two issues here:

1.
"pvscan --cache" uses filter, but "pvscan --cache <device>" *does not* use filtering! But if any block device appears on a system (including device-mapper devices - LVs, in my case it was an LV used for a guest where inside this LV a PV was defined, but only to be visible for a guest, not for the host - so I set that in host's lvm.conf filter), the "pvscan --cache <device>" is called from within 69-dm-lvmetad.rules by default on *all block devices that are marked as PVs* (the scan whether this is a PV or not is done by blkid).

So "pvscan --cache <device>" should probably use filters as well!

There's probably a counterargument that if one passes <device> directly on the command line, we don't need to filter, since scanning that device directly is the user's choice... but we call "pvscan --cache <device>" from the udev rules, and otherwise we would have to read lvm.conf and do the filtering outside the pvscan --cache call, which would be neither practical nor efficient (it would require another call in the udev rules just to read the lvm.conf filters). An illustrative lvm.conf filter excerpt is sketched at the end of this comment.

2.
The vgscan/pvscan --cache issue reported here: information that is detected to be stale within the vgscan/pvscan --cache call is not flushed from lvmetad.
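
For issue 1, a sketch of what such a host-side lvm.conf filter might look like, using the device path that the -vvvv log in comment 3 shows being skipped by the regex filter (the filter actually used on my host is not shown in this bug):

# /etc/lvm/lvm.conf (excerpt, illustrative only)
devices {
    # reject the LV that carries the guest's PV, accept everything else
    filter = [ "r|^/dev/vg_data/rhel6_stg_shared_01$|", "a|.*|" ]
}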

Comment 5 Petr Rockai 2012-09-10 14:18:10 UTC
In fact, vgscan --cache needs to be dropped (or made an alias for pvscan --cache). The code is bogus. The filter issues are tracked in bug 814782 (should be ready to merge). I'll see to both.

Comment 6 Petr Rockai 2012-09-26 18:44:01 UTC
Fixed upstream.

Comment 8 Nenad Peric 2013-01-22 14:23:14 UTC
Testing using the reported instructions:

(08:20:21) [root@r6-node01:~]$ pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
(08:20:25) [root@r6-node01:~]$ vgcreate corrupted /dev/sdc1
  Volume group "corrupted" successfully created
(08:20:34) [root@r6-node01:~]$ lvcreate -n lvcorr corrupted -l 9
  Logical volume "lvcorr" created
(08:20:46) [root@r6-node01:~]$ vgscan --cache
  Reading all physical volumes.  This may take a while...
  Found volume group "corrupted" using metadata type lvm2
  Found volume group "VolGroup" using metadata type lvm2
(08:20:52) [root@r6-node01:~]$ lvs
  LV      VG        Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao---  7.54g                                             
  lv_swap VolGroup  -wi-ao---  1.97g                                             
  lvcorr  corrupted -wi-a---- 36.00m                                             
(08:20:55) [root@r6-node01:~]$ pvs
  PV         VG        Fmt  Attr PSize  PFree 
  /dev/sdb1            lvm2 a--  10.00g 10.00g
  /dev/sdc1  corrupted lvm2 a--   9.99g  9.96g
  /dev/sdd1            lvm2 a--  10.00g 10.00g
  /dev/sdf1            lvm2 a--  10.00g 10.00g
  /dev/vda2  VolGroup  lvm2 a--   9.51g     0 
(08:20:55) [root@r6-node01:~]$ dd if=/dev/urandom of=/dev/sdc1 bs=1024 count=10240
10240+0 records in
10240+0 records out
10485760 bytes (10 MB) copied, 1.27014 s, 8.3 MB/s
(08:21:10) [root@r6-node01:~]$ vgscan --cache
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup" using metadata type lvm2
(08:21:13) [root@r6-node01:~]$ lvs
  LV      VG       Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao--- 7.54g                                             
  lv_swap VolGroup -wi-ao--- 1.97g                                             
(08:21:16) [root@r6-node01:~]$ vgs
  VG       #PV #LV #SN Attr   VSize VFree
  VolGroup   1   2   0 wz--n- 9.51g    0 
(08:21:17) [root@r6-node01:~]$ pvs
  PV         VG       Fmt  Attr PSize  PFree 
  /dev/sdb1           lvm2 a--  10.00g 10.00g
  /dev/sdd1           lvm2 a--  10.00g 10.00g
  /dev/sdf1           lvm2 a--  10.00g 10.00g
  /dev/vda2  VolGroup lvm2 a--   9.51g     0 
(08:21:18) [root@r6-node01:~]$ dmsetup ls
VolGroup-lv_swap	(253:1)
VolGroup-lv_root	(253:0)
corrupted-lvcorr	(253:2)
(08:21:21) [root@r6-node01:~]$ dmsetup remove corrupted-lvcorr
(08:21:28) [root@r6-node01:~]$ 


The cleanup of the leftover LV still has to be done with dmsetup at the end, as described in the first comment.


Verified with:

lvm2-2.02.98-8.el6.x86_64
lvm2-libs-2.02.98-8.el6.x86_64
device-mapper-1.02.77-8.el6.x86_64
udev-147-2.46.el6.x86_64
kernel-2.6.32-355.el6.x86_64

Comment 9 errata-xmlrpc 2013-02-21 08:11:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html

