Bug 1741276 - what is the proper way to bring back/restore/correct a VG whose PV has gone missing and returned "WARNING: ignoring metadata seqno 3 on /dev/sdp1"
Summary: what is the proper way to bring back/restore/correct a VG whose PV has gone missin...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.0
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-14 16:23 UTC by Corey Marthaler
Modified: 2021-09-07 11:49 UTC
CC List: 12 users

Fixed In Version: lvm2-2.03.11-0.2.20201103git8801a86.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-18 15:01:41 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments

Description Corey Marthaler 2019-08-14 16:23:00 UTC
Description of problem:

When a device/PV goes missing and/or comes back, lvm usually provides a hint about which command is required to restore the VG/PV to its original or proper state. In 8.1 it now appears the device is simply lost to some internal LVM filter for all eternity.



hayes-03: pvcreate  /dev/sde1 /dev/sdd1 /dev/sdl1 /dev/sdj1 /dev/sdf1 /dev/sdo1 /dev/sdp1 /dev/sdi1 /dev/sdg1 /dev/sdh1
hayes-03: vgcreate   raid_sanity /dev/sde1 /dev/sdd1 /dev/sdl1 /dev/sdj1 /dev/sdf1 /dev/sdo1 /dev/sdp1 /dev/sdi1 /dev/sdg1 /dev/sdh1

============================================================
Iteration 1 of 2 started at Wed Aug 14 11:11:31 CDT 2019
============================================================
SCENARIO (raid1) - [degraded_upconversion_attempt]
Create a raid, fail one of the legs to enter a degraded state, and then attempt an upconversion
lvcreate  --type raid1 -m 1 -n degraded_upconvert -L 100M raid_sanity /dev/sdd1 /dev/sdp1

secondary fail=/dev/sdp1
Disabling device sdp on hayes-03
rescan device...

Verifying this VG is now in an "Inconsistent" state due to the missing PV
pvs /dev/sdp1
  /dev/sdp: open failed: No such device or address
  /dev/sdp: open failed: No such device or address
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  Error reading device /dev/sdp1 at 0 length 4096.
  /dev/sdp: open failed: No such device or address
  Error reading device /dev/sdp1 at 0 length 4096.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: VG raid_sanity is missing PV FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find all devices for LV raid_sanity/degraded_upconvert_rimage_1 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV raid_sanity/degraded_upconvert_rmeta_1 while checking used and assumed devices.
  /dev/sdp: open failed: No such device or address
  /dev/sdp: open failed: No such device or address
  Error reading device /dev/sdp1 at 0 length 4096.
  Failed to find device for physical volume "/dev/sdp1".
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: VG raid_sanity is missing PV FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find all devices for LV raid_sanity/degraded_upconvert_rimage_1 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV raid_sanity/degraded_upconvert_rmeta_1 while checking used and assumed devices.
  /dev/sdp: open failed: No such device or address
  /dev/sdp: open failed: No such device or address
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: VG raid_sanity is missing PV FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find all devices for LV raid_sanity/degraded_upconvert_rimage_1 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV raid_sanity/degraded_upconvert_rmeta_1 while checking used and assumed devices.

Reducing the VG to remove the failed device and put it into degraded raid mode (vgreduce --removemissing -f raid_sanity)
  /dev/sdp: open failed: No such device or address
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: VG raid_sanity is missing PV FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find all devices for LV raid_sanity/degraded_upconvert_rimage_1 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV raid_sanity/degraded_upconvert_rmeta_1 while checking used and assumed devices.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  WARNING: Couldn't find device with uuid FqSTr1-epOY-qTOB-vsL0-9Myt-GLkf-uTVC3q.
  /dev/sdp: open failed: No such device or address
  /dev/sdp1: open failed: No such device or address
  /dev/sdp: open failed: No such device or address
  /dev/sdp1: open failed: No such device or address

Enabling device sdp on hayes-03
Running vgs to make LVM update the metadata version if possible (will restore a-m PVs)
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.

Restoring the VG back to its original state
vgextend raid_sanity /dev/sdp1
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.
  Device /dev/sdp1 excluded by a filter.
unable to vgextend VG



### Attempting the normal ways I usually bring back a failed and reappeared device.

[root@hayes-03 ~]# pvscan
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.
  PV /dev/sde1   VG raid_sanity     lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdd1   VG raid_sanity     lvm2 [446.62 GiB / <446.52 GiB free]
  PV /dev/sdl1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdj1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdf1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdo1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdi1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdh1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 9 [13.60 TiB] / in use: 9 [13.60 TiB] / in no VG: 0 [0   ]
[root@hayes-03 ~]# lvs -a -o +devices
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.
  LV                            VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                      
  degraded_upconvert            raid_sanity rwi-a-r-r- 100.00m                                    100.00           degraded_upconvert_rimage_0(0),degraded_upconvert_rimage_1(0)
  [degraded_upconvert_rimage_0] raid_sanity iwi-aor--- 100.00m                                                     /dev/sdd1(1)                                                 
  [degraded_upconvert_rimage_1] raid_sanity vwi-aor-r- 100.00m                                                                                                                  
  [degraded_upconvert_rmeta_0]  raid_sanity ewi-aor---   4.00m                                                     /dev/sdd1(0)                                                 
  [degraded_upconvert_rmeta_1]  raid_sanity ewi-aor-r-   4.00m                                                                                                                  
[root@hayes-03 ~]# pvscan
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.
  PV /dev/sde1   VG raid_sanity     lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdd1   VG raid_sanity     lvm2 [446.62 GiB / <446.52 GiB free]
  PV /dev/sdl1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdj1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdf1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdo1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdi1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdh1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 9 [13.60 TiB] / in use: 9 [13.60 TiB] / in no VG: 0 [0   ]
[root@hayes-03 ~]# pvremove -f /dev/sdp1
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.
  Device /dev/sdp1 excluded by a filter.
[root@hayes-03 ~]# pvremove -ff /dev/sdp1
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.
  Device /dev/sdp1 excluded by a filter.
[root@hayes-03 ~]# pvcreate -ff /dev/sdp1
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.
  Device /dev/sdp1 excluded by a filter.
[root@hayes-03 ~]# vgreduce --removemissing --force raid_sanity
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.
  Volume group "raid_sanity" is already consistent.
[root@hayes-03 ~]# lvs -a -o +devices
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.
  LV                            VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                      
  degraded_upconvert            raid_sanity rwi-a-r-r- 100.00m                                    100.00           degraded_upconvert_rimage_0(0),degraded_upconvert_rimage_1(0)
  [degraded_upconvert_rimage_0] raid_sanity iwi-aor--- 100.00m                                                     /dev/sdd1(1)                                                 
  [degraded_upconvert_rimage_1] raid_sanity vwi-aor-r- 100.00m                                                                                                                  
  [degraded_upconvert_rmeta_0]  raid_sanity ewi-aor---   4.00m                                                     /dev/sdd1(0)                                                 
  [degraded_upconvert_rmeta_1]  raid_sanity ewi-aor-r-   4.00m                                                                                                                  
[root@hayes-03 ~]# vgcfgrestore --force raid_sanity
  Volume group raid_sanity has active volume: degraded_upconvert_rimage_1.
  Volume group raid_sanity has active volume: degraded_upconvert_rimage_0.
  Volume group raid_sanity has active volume: degraded_upconvert_rmeta_1.
  Volume group raid_sanity has active volume: degraded_upconvert_rmeta_0.
  Volume group raid_sanity has active volume: degraded_upconvert.
  WARNING: Found 5 active volume(s) in volume group "raid_sanity".
  Restoring VG with active LVs, may cause mismatch with its metadata.
Do you really want to proceed with restore of volume group "raid_sanity", while 5 volume(s) are active? [y/n]: y
  Restored volume group raid_sanity.
[root@hayes-03 ~]# pvscan
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 6 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 6.
  PV /dev/sde1   VG raid_sanity     lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdd1   VG raid_sanity     lvm2 [446.62 GiB / <446.52 GiB free]
  PV /dev/sdl1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdj1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdf1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdo1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdi1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdh1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 9 [13.60 TiB] / in use: 9 [13.60 TiB] / in no VG: 0 [0   ]
[root@hayes-03 ~]# lvs -a -o +devices
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 6 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 6.
  LV                            VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                      
  degraded_upconvert            raid_sanity rwi-a-r-r- 100.00m                                    100.00           degraded_upconvert_rimage_0(0),degraded_upconvert_rimage_1(0)
  [degraded_upconvert_rimage_0] raid_sanity iwi-aor--- 100.00m                                                     /dev/sdd1(1)                                                 
  [degraded_upconvert_rimage_1] raid_sanity vwi-aor-r- 100.00m                                                                                                                  
  [degraded_upconvert_rmeta_0]  raid_sanity ewi-aor---   4.00m                                                     /dev/sdd1(0)                                                 
  [degraded_upconvert_rmeta_1]  raid_sanity ewi-aor-r-   4.00m                                                                                                                  
[root@hayes-03 ~]# pvcreate -ff /dev/sdp1
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 6 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 6.
  Device /dev/sdp1 excluded by a filter.
[root@hayes-03 ~]# pvremove -ff /dev/sdp1
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 6 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 6.
  Device /dev/sdp1 excluded by a filter.
[root@hayes-03 ~]# pvscan
  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 6 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 6.
  PV /dev/sde1   VG raid_sanity     lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdd1   VG raid_sanity     lvm2 [446.62 GiB / <446.52 GiB free]
  PV /dev/sdl1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdj1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdf1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdo1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdi1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdh1   VG raid_sanity     lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 9 [13.60 TiB] / in use: 9 [13.60 TiB] / in no VG: 0 [0   ]


Version-Release number of selected component (if applicable):
kernel-4.18.0-127.el8    BUILT: Thu Aug  1 14:38:42 CDT 2019
lvm2-2.03.05-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
lvm2-libs-2.03.05-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
lvm2-dbusd-2.03.05-2.el8    BUILT: Wed Jul 24 08:07:38 CDT 2019
lvm2-lockd-2.03.05-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
cmirror-2.03.05-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
device-mapper-1.02.163-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
device-mapper-libs-1.02.163-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
device-mapper-event-1.02.163-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
device-mapper-event-libs-1.02.163-2.el8    BUILT: Wed Jul 24 08:05:11 CDT 2019
device-mapper-persistent-data-0.8.5-2.el8    BUILT: Wed Jun  5 10:28:04 CDT 2019


How reproducible:
Every time

Comment 1 David Teigland 2019-08-14 17:25:49 UTC
Improved handling of missing/outdated/bad PVs is one of the big improvements in 8.1, but we've not documented it yet, which will be important for the release.

This was the main commit (explaining a lot of internal details in addition to the externally visible issues):
https://sourceware.org/git/?p=lvm2.git;a=commit;h=ba7ff96faff052c6145c71222ea5047a6bcee33b

New tests related to this are:
missing-pv.sh, missing-pv-unused.sh, metadata-old.sh, outdated-pv.sh, metadata-bad-text.sh, metadata-bad-mdaheader.sh

The tests may also be useful in that they should fully characterize the expected behaviors for all variations of missing and bad PVs.

The warning messages above show that a device went missing, and while it was missing it was removed from the VG.  Then the PV reappeared with a copy of the outdated metadata which shows it still belongs to the VG (which it doesn't any more):

  WARNING: ignoring metadata seqno 3 on /dev/sdp1 for seqno 5 on /dev/sdd1 for VG raid_sanity.
  WARNING: Inconsistent metadata found for VG raid_sanity
  WARNING: outdated PV /dev/sdp1 seqno 3 has been removed in current VG raid_sanity seqno 5.

This condition should no longer have any adverse impacts on using the VG, but it does produce the warnings, and the outdated metadata on the PV should be cleared.  Previously, a random lvm command which reads metadata (like 'vgs') would be hijacked to *write* to this outdated PV and clear the outdated metadata from it (this was never a terribly safe idea).  Now, a new command has been introduced to repair this condition, among others:

vgck --updatemetadata <vgname>
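
For the VG in this report, the invocation would presumably be:

  vgck --updatemetadata raid_sanity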


Copying a small part of the commit message:

The new command:
vgck --updatemetadata VG

first uses vg_write to repair old metadata, and other basic
issues mentioned above (old metadata, outdated PVs, pv_header
flags, MISSING_PV flags).  It will also go further and repair
bad metadata:

. text metadata that has a bad checksum
. text metadata that is not parsable
. corrupt mda_header checksum and version fields


(Also somewhat related, 8.1 has a new feature for diagnosing PV header/metadata problems via pvck --dump header|metadata|metadata_all <dev>.)
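
As a rough sketch (assuming the option syntax quoted above), inspecting the reappeared device in this report might look like:

  pvck --dump metadata /dev/sdp1        # print the metadata text currently found on the PV
  pvck --dump metadata_all /dev/sdp1    # presumably also lists any older metadata copies in the area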

Comment 2 David Teigland 2019-08-14 20:12:41 UTC
Here's a first cut at an outline of changes in this area for users.


PV with old metadata
. If VG metadata is not written to a PV, and the PV has not been removed
  from the VG, this leaves the PV with an old copy of the VG metadata.
. A command that modifies VG metadata (e.g. lvcreate) will automatically
  update the PV with the latest copy of VG metadata. 
. The command 'vgck --updatemetadata VG' will also update the PV with
  the latest VG metadata.

outdated PV 
. If a missing PV is removed from a VG (i.e. with vgreduce), and later
  the PV reappears with its old metadata, LVM will ignore the old PV
  and metadata (but will print a warning about it.)
. The command 'vgck --updatemetadata VG' will clear the outdated
  metadata from the PV.  The PV will become unused.

bad VG metadata text
. If metadata text is damaged in one metadata area, and good metadata
  remains in another metadata area (on the same or another PV), the VG
  continues to be usable, but warnings are printed.
. The command 'vgck --updatemetadata VG' will repair the damaged
  metadata text, replacing it with metadata from another metadata area.

bad VG metadata header
. If the metadata header is damaged in one metadata area, and good
  metadata header/text remains in another metadata area, the VG
  continues to be usable.
. The command 'vgck --updatemetadata VG' will repair the damaged
  metadata header.
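
For the "outdated PV" case hit in this report, a plausible (untested here) recovery sequence would be:

  vgck --updatemetadata raid_sanity    # should clear the outdated seqno 3 metadata; /dev/sdp1 becomes unused
  vgextend raid_sanity /dev/sdp1       # re-add the now-unused PV to restore the VG's original layout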

Comment 3 David Teigland 2020-06-03 17:41:35 UTC
in master:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=fa9eb76a5dc8d76a1f06a44337f909c75df1fee9

Added vgck man page description for --updatemetadata, and added a pointer to vgck --updatemetadata next to the warnings about inconsistent and outdated metadata.

       --updatemetadata
              Update VG metadata to correct problems.  If VG metadata was
              updated while a PV was missing, and the PV reappears with an old
              version of metadata, then this option (or any other command that
              writes metadata) will update the metadata on the previously
              missing PV. If a PV was removed from a VG while it was missing,
              and the PV reappears, using this option will clear the outdated
              metadata from the previously missing PV. If metadata text is
              damaged on one PV, using this option will replace the damaged
              metadata text. For more severe damage, e.g. with headers, see
              pvck(8).


[ 0:01] #metadata-old.sh:49+ pvs
[ 0:01]   WARNING: ignoring metadata seqno 3 on /tmp/pv2 for seqno 4 on /tmp/pv1 for VG LVMTEST38825vg.
[ 0:01]   WARNING: Inconsistent metadata found for VG LVMTEST38825vg.
[ 0:01]   See vgck --updatemetadata to correct inconsistency.
[ 0:01]   WARNING: VG LVMTEST38825vg was previously updated while PV /tmp/pv2 was missing.
[ 0:01]   WARNING: VG LVMTEST38825vg has unused reappeared PV /tmp/pv2 5JvVEg-VfjB-ME2M-iL9I-QTua-id7R-bZEsG3.
[ 0:01]   PV       VG             Fmt  Attr PSize  PFree 
[ 0:01]   /tmp/pv1 LVMTEST38825vg lvm2 a--  32.00m 28.00m
[ 0:01]   /tmp/pv2 LVMTEST38825vg lvm2 a--  32.00m 32.00m
[ 0:01]   /tmp/pv3 LVMTEST38825vg lvm2 a--  32.00m 32.00m


[ 0:00] #outdated-pv.sh:41+ not pvs /tmp/pv2
[ 0:00]   WARNING: ignoring metadata seqno 3 on /tmp/pv2 for seqno 4 on /tmp/pv1 for VG LVMTEST40640vg.
[ 0:00]   WARNING: Inconsistent metadata found for VG LVMTEST40640vg.
[ 0:00]   See vgck --updatemetadata to correct inconsistency.
[ 0:00]   WARNING: outdated PV /tmp/pv2 seqno 3 has been removed in current VG LVMTEST40640vg seqno 4.
[ 0:00]   See vgck --updatemetadata to clear outdated metadata.
[ 0:00]   Failed to find physical volume "/tmp/pv2".

Comment 14 Corey Marthaler 2021-01-13 00:41:27 UTC
Fix verified in the latest rpms. 

kernel-4.18.0-271.el8    BUILT: Fri Jan  8 03:32:43 CST 2021
lvm2-2.03.11-0.4.20201222gitb84a992.el8    BUILT: Tue Dec 22 06:33:49 CST 2020
lvm2-libs-2.03.11-0.4.20201222gitb84a992.el8    BUILT: Tue Dec 22 06:33:49 CST 2020


"vgck --updatemetadata" is now used in our raid failure scenarios

[root@host-093 ~]# pvscan
  PV /dev/vda2   VG rhel_host-093   lvm2 [<7.00 GiB / 1.40 GiB free]
  WARNING: ignoring metadata seqno 147 on /dev/sde1 for seqno 148 on /dev/sda1 for VG black_bird.
  WARNING: Inconsistent metadata found for VG black_bird.
  See vgck --updatemetadata to correct inconsistency.
  WARNING: VG black_bird was previously updated while PV /dev/sde1 was missing.
  WARNING: VG black_bird was missing PV /dev/sde1 fqQMJY-Hi4s-jg2F-XcdM-PqoO-VNuf-nXhqwP.
  PV /dev/sdd1   VG black_bird      lvm2 [<29.99 GiB / 29.48 GiB free]
  PV /dev/sdb1   VG black_bird      lvm2 [<29.99 GiB / <29.99 GiB free]
  PV /dev/sde1   VG black_bird      lvm2 [<29.99 GiB / 29.48 GiB free]
  PV /dev/sdf1   VG black_bird      lvm2 [<29.99 GiB / <29.99 GiB free]
  PV /dev/sdg1   VG black_bird      lvm2 [<29.99 GiB / <29.99 GiB free]
  PV /dev/sdc1   VG black_bird      lvm2 [<29.99 GiB / 29.48 GiB free]
  PV /dev/sdh1   VG black_bird      lvm2 [<29.99 GiB / 29.48 GiB free]
  PV /dev/sda1   VG black_bird      lvm2 [<29.99 GiB / 29.48 GiB free]
  Total: 9 [246.90 GiB] / in use: 9 [246.90 GiB] / in no VG: 0 [0   ]

[root@host-093 ~]# lvs
  WARNING: ignoring metadata seqno 147 on /dev/sde1 for seqno 148 on /dev/sda1 for VG black_bird.
  WARNING: Inconsistent metadata found for VG black_bird.
  See vgck --updatemetadata to correct inconsistency.
  WARNING: VG black_bird was previously updated while PV /dev/sde1 was missing.
  WARNING: VG black_bird was missing PV /dev/sde1 fqQMJY-Hi4s-jg2F-XcdM-PqoO-VNuf-nXhqwP.
  LV                                             VG            Attr       LSize   Pool   Origin                                       Data%  Meta%  Move Log Cpy%Sync Convert
  synced_random_raid1_3legs_1                    black_bird    rwi---r--- 500.00m                                                                                            
  synced_random_raid1_3legs_1_rimage_1_extracted black_bird    gwi-----p- 500.00m        [synced_random_raid1_3legs_1_rimage_1_iorig]                                        
  synced_random_raid1_3legs_1_rimage_4_imeta     black_bird    -wi-------  12.00m                                                                                            
  synced_random_raid1_3legs_1_rmeta_1_extracted  black_bird    -wi-----p-   4.00m                                                                                            

[root@host-093 ~]#  vgck --updatemetadata black_bird
  WARNING: ignoring metadata seqno 147 on /dev/sde1 for seqno 148 on /dev/sda1 for VG black_bird.
  WARNING: Inconsistent metadata found for VG black_bird.
  See vgck --updatemetadata to correct inconsistency.
  WARNING: VG black_bird was previously updated while PV /dev/sde1 was missing.
  WARNING: VG black_bird was missing PV /dev/sde1 fqQMJY-Hi4s-jg2F-XcdM-PqoO-VNuf-nXhqwP.
  WARNING: VG black_bird was previously updated while PV /dev/sde1 was missing.
  WARNING: updating old metadata to 149 on /dev/sde1 for VG black_bird.
  WARNING: VG black_bird was previously updated while PV /dev/sde1 was missing.

[root@host-093 ~]# echo $?
0
[root@host-093 ~]# lvs -a -o +devices
  WARNING: VG black_bird was previously updated while PV /dev/sde1 was missing.
  WARNING: VG black_bird was missing PV /dev/sde1 fqQMJY-Hi4s-jg2F-XcdM-PqoO-VNuf-nXhqwP.
  LV                                             VG            Attr       LSize   Pool   Origin                                       Devices
  synced_random_raid1_3legs_1                    black_bird    rwi---r--- 500.00m                                                     synced_random_raid1_3legs_1_rimage_0(0),synced_random_raid1_3legs_1_rimage_4(0),synced_random_raid1_3legs_1_rimage_2(0),synced_random_raid1_3legs_1_rimage_3(0)
  [synced_random_raid1_3legs_1_rimage_0]         black_bird    gwi---r--- 500.00m        [synced_random_raid1_3legs_1_rimage_0_iorig] synced_random_raid1_3legs_1_rimage_0_iorig(0)
  [synced_random_raid1_3legs_1_rimage_0_imeta]   black_bird    ewi-------  12.00m                                                     /dev/sda1(126)
  [synced_random_raid1_3legs_1_rimage_0_iorig]   black_bird    -wi------- 500.00m                                                     /dev/sda1(1)
  synced_random_raid1_3legs_1_rimage_1_extracted black_bird    gwi-----p- 500.00m        [synced_random_raid1_3legs_1_rimage_1_iorig] synced_random_raid1_3legs_1_rimage_1_iorig(0)
  [synced_random_raid1_3legs_1_rimage_1_imeta]   black_bird    ewi-------  12.00m                                                     /dev/sde1(126)
  [synced_random_raid1_3legs_1_rimage_1_iorig]   black_bird    -wi-----p- 500.00m                                                     /dev/sde1(1)
  [synced_random_raid1_3legs_1_rimage_2]         black_bird    gwi---r-w- 500.00m        [synced_random_raid1_3legs_1_rimage_2_iorig] synced_random_raid1_3legs_1_rimage_2_iorig(0)
  [synced_random_raid1_3legs_1_rimage_2_imeta]   black_bird    ewi-------  12.00m                                                     /dev/sdh1(126)
  [synced_random_raid1_3legs_1_rimage_2_iorig]   black_bird    -wi------- 500.00m                                                     /dev/sdh1(1)
  [synced_random_raid1_3legs_1_rimage_3]         black_bird    gwi---r-w- 500.00m        [synced_random_raid1_3legs_1_rimage_3_iorig] synced_random_raid1_3legs_1_rimage_3_iorig(0)
  [synced_random_raid1_3legs_1_rimage_3_imeta]   black_bird    ewi-------  12.00m                                                     /dev/sdc1(126)
  [synced_random_raid1_3legs_1_rimage_3_iorig]   black_bird    -wi------- 500.00m                                                     /dev/sdc1(1)
  [synced_random_raid1_3legs_1_rimage_4]         black_bird    Iwi---r--- 500.00m                                                     /dev/sdd1(1)
  synced_random_raid1_3legs_1_rimage_4_imeta     black_bird    -wi-------  12.00m                                                     /dev/sdd1(126)
  [synced_random_raid1_3legs_1_rmeta_0]          black_bird    ewi---r---   4.00m                                                     /dev/sda1(0)
  synced_random_raid1_3legs_1_rmeta_1_extracted  black_bird    -wi-----p-   4.00m                                                     /dev/sde1(0)
  [synced_random_raid1_3legs_1_rmeta_2]          black_bird    ewi---r---   4.00m                                                     /dev/sdh1(0)
  [synced_random_raid1_3legs_1_rmeta_3]          black_bird    ewi---r---   4.00m                                                     /dev/sdc1(0)
  [synced_random_raid1_3legs_1_rmeta_4]          black_bird    ewi---r---   4.00m                                                     /dev/sdd1(0)

[root@host-093 ~]# vgs
  WARNING: VG black_bird was previously updated while PV /dev/sde1 was missing.
  WARNING: VG black_bird was missing PV /dev/sde1 fqQMJY-Hi4s-jg2F-XcdM-PqoO-VNuf-nXhqwP.
  VG            #PV #LV #SN Attr   VSize    VFree   
  black_bird      8   4   0 wz-pn- <239.91g <237.39g

Comment 15 Corey Marthaler 2021-01-14 19:46:53 UTC
Quick verification note: depending on the issues, vgck may need to be run multiple times:


[root@hayes-02 ~]# pvscan
  WARNING: ignoring metadata seqno 353 on /dev/sdh for seqno 3068 on /dev/sdb for VG black_bird.
  WARNING: Inconsistent metadata found for VG black_bird.
  See vgck --updatemetadata to correct inconsistency.
  WARNING: outdated PV /dev/sdh seqno 353 has been removed in current VG black_bird seqno 3068.
  See vgck --updatemetadata to clear outdated metadata.
  PV /dev/sdj   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdn   VG black_bird      lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdb   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sde   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdi   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdl   VG black_bird      lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdm   VG black_bird      lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdd   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdf   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdk   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdc   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 12 [<17.68 TiB] / in use: 12 [<17.68 TiB] / in no VG: 0 [0   ]

[root@hayes-02 ~]# lvs
  WARNING: ignoring metadata seqno 353 on /dev/sdh for seqno 3068 on /dev/sdb for VG black_bird.
  WARNING: Inconsistent metadata found for VG black_bird.
  See vgck --updatemetadata to correct inconsistency.
  WARNING: outdated PV /dev/sdh seqno 353 has been removed in current VG black_bird seqno 3068.
  See vgck --updatemetadata to clear outdated metadata.

[root@hayes-02 ~]#  vgck --updatemetadata black_bird
  WARNING: ignoring metadata seqno 353 on /dev/sdh for seqno 3068 on /dev/sdb for VG black_bird.
  WARNING: Inconsistent metadata found for VG black_bird.
  See vgck --updatemetadata to correct inconsistency.
  WARNING: outdated PV /dev/sdh seqno 353 has been removed in current VG black_bird seqno 3068.
  See vgck --updatemetadata to clear outdated metadata.
  WARNING: wiping mda on outdated PV /dev/sdh
  WARNING: wiping header on outdated PV /dev/sdh

[root@hayes-02 ~]# vgck --updatemetadata black_bird

[root@hayes-02 ~]# pvscan
  PV /dev/sdj   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdn   VG black_bird      lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdb   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sde   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdi   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdl   VG black_bird      lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdm   VG black_bird      lvm2 [446.62 GiB / 446.62 GiB free]
  PV /dev/sdd   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdf   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdk   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdc   VG black_bird      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdh                      lvm2 [<1.82 TiB]
  Total: 13 [<19.50 TiB] / in use: 12 [<17.68 TiB] / in no VG: 1 [<1.82 TiB]

Comment 17 errata-xmlrpc 2021-05-18 15:01:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1659

