Bug 1509041 - request to add warning when using vgcfgrestore against activated volume
Summary: request to add warning when using vgcfgrestore against activated volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1546181
 
Reported: 2017-11-02 19:09 UTC by John Pittman
Modified: 2021-12-10 15:22 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.180-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 11:02:26 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
  Red Hat Product Errata RHBA-2018:3193 (last updated 2018-10-30 11:03:20 UTC)

Description John Pittman 2017-11-02 19:09:09 UTC
Description of problem:

On-disk metadata and the kernel can get out of sync when vgcfgrestore is run against an activated volume. A warning needs to be put into place to prompt/warn the user or require --force.
Version-Release number of selected component (if applicable):

How reproducible:

- Create striped volume group with logical volumes that contain multiple segments:

[root@localhost ~]# vgcreate stripe_vg /dev/sdb /dev/sdc
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.
  Volume group "stripe_vg" successfully created
 
[root@localhost ~]# lvcreate -n stripe_lv -i 2 -l 100 stripe_vg
  Using default stripesize 64.00 KiB.
  Logical volume "stripe_lv" created.
 
[root@localhost ~]# lvcreate -n stripe_lv2 -i 2 -l 100 stripe_vg
  Using default stripesize 64.00 KiB.
  Logical volume "stripe_lv2" created.
 
[root@localhost ~]# lvextend -l +100 stripe_vg/stripe_lv
  Using stripesize of last segment 64.00 KiB
  Size of logical volume stripe_vg/stripe_lv changed from 400.00 MiB (100 extents) to 800.00 MiB (200 extents).
  Logical volume stripe_vg/stripe_lv successfully resized.
 
[root@localhost ~]# lvextend -l +100%FREE stripe_vg/stripe_lv
  Using stripesize of last segment 64.00 KiB
  Size of logical volume stripe_vg/stripe_lv changed from 800.00 MiB (200 extents) to 1.60 GiB (410 extents).
  Logical volume stripe_vg/stripe_lv successfully resized.

- Oops, we wanted stripe_lv2 to be segmented as well, so we have to roll back. Use vgcfgrestore to go back to the state before the lvextend.
 
[root@localhost ~]# grep description /etc/lvm/archive/stripe_vg_00018-1652354449.vg
description = "Created *before* executing 'lvextend -l +100%FREE stripe_vg/stripe_lv'"
 
[root@localhost ~]# vgcfgrestore --file /etc/lvm/archive/stripe_vg_00018-1652354449.vg stripe_vg
  Restored volume group stripe_vg
 
- Observe unknown (X) values in lvm output:

[root@localhost ~]# lvs -o +devices,seg_pe_ranges
  WARNING: Cannot find matching striped segment for stripe_vg/stripe_lv.
  LV         VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                     PE Ranges                        
  root       rhel      -wi-ao----  <6.20g                                                     /dev/sda2(205)              /dev/sda2:205-1790              
  swap       rhel      -wi-ao---- 820.00m                                                     /dev/sda2(0)                /dev/sda2:0-204                  
  stripe_lv  stripe_vg -wi-a----- 800.00m                                                     /dev/sdb(0),/dev/sdc(0)     /dev/sdb:0-49 /dev/sdc:0-49      
  stripe_lv  stripe_vg -wi-XX--X- 800.00m                                                     /dev/sdb(100),/dev/sdc(100) /dev/sdb:100-149 /dev/sdc:100-149
  stripe_lv2 stripe_vg -wi-a----- 400.00m                                                     /dev/sdb(50),/dev/sdc(50)   /dev/sdb:50-99 /dev/sdc:50-99    
 
Actual results:

On-disk metadata and the kernel become out of sync after the vgcfgrestore command is run against the active volume group.
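
To make the "out of sync" claim concrete: the kernel keeps running the device-mapper tables that were loaded before the restore, while the restored on-disk metadata describes the earlier, smaller layout. A minimal sketch of how one might observe the divergence, assuming the stripe_vg/stripe_lv names from the reproduction above:

# Kernel view: the live device-mapper table still reflects the extended LV.
dmsetup table stripe_vg-stripe_lv
dmsetup info -c stripe_vg-stripe_lv

# On-disk view: the restored metadata describes the pre-extend layout.
lvs -o lv_name,lv_size,seg_pe_ranges stripe_vg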

Expected results:

A warning should be given when vgcfgrestore is run against a volume group with active volumes, possibly a [yes/no] prompt or a requirement to pass --force.
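
For comparison, a minimal sketch of the workflow that avoids the problem, reusing the archive file from the reproduction above (this assumes the LVs can be taken offline first):

# Deactivate everything in the VG before restoring older metadata, so the
# kernel never holds tables that disagree with the on-disk copy.
vgchange -an stripe_vg
vgcfgrestore --file /etc/lvm/archive/stripe_vg_00018-1652354449.vg stripe_vg
vgchange -ay stripe_vg
lvs -o +devices,seg_pe_ranges stripe_vg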

Comment 3 John Pittman 2017-11-02 19:15:16 UTC
Description of problem:

On-disk metadata and the kernel can get out of sync when vgcfgrestore is run against an activated volume. A warning needs to be put into place to prompt/warn the user or require --force.

Version-Release number of selected component (if applicable):

device-mapper-1.02.140-8.el7.x86_64
lvm2-2.02.171-8.el7.x86_64
kernel-3.10.0-693.2.2.el7.x86_64

Comment 5 Corey Marthaler 2018-04-16 16:28:11 UTC
## It appears the warning already exists for virt thin origin volumes.

[root@host-073 ~]# vgcfgrestore -f /etc/lvm/archive/snapper_thinp_00013-1327125970.vg snapper_thinp
  Consider using option --force to restore Volume Group snapper_thinp with thin volumes.
  Restore failed.



## However, should the "--force" succeed even when active? Or should LVM require the thin volumes and pool to be inactive here?

[root@host-073 ~]# vgcfgrestore --force -f /etc/lvm/archive/snapper_thinp_00013-1327125970.vg snapper_thinp
  WARNING: Forced restore of Volume Group snapper_thinp with thin volumes.
  Restored volume group snapper_thinp

[root@host-073 ~]# lvs -a -o +devices
  WARNING: Cannot find matching thin segment for snapper_thinp/active_restore.
  LV              VG            Attr       LSize   Pool  Origin Data%  Meta%   Devices
  POOL            snapper_thinp twi-aotz--   1.00g              0.00   1.56    POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao----   1.00g                             /dev/sda3(1)
  [POOL_tmeta]    snapper_thinp ewi-ao----   4.00m                             /dev/sdb2(0)
  active_restore  snapper_thinp Vwi-XXtzX-   1.00g POOL  origin
  [lvol0_pmspare] snapper_thinp ewi-------   4.00m                             /dev/sda3(0)
  origin          snapper_thinp Vwi-a-tz--   1.00g POOL         0.00
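
As a side note, a cautious user or wrapper script could check for active LVs before attempting the restore; a minimal sketch using the lvs report-selection syntax, assuming the snapper_thinp VG from the transcript above:

# Any output here means LVs are still active and a restore could leave the
# kernel and on-disk metadata out of sync; deactivate the VG first.
lvs --noheadings -o lv_name,lv_active --select 'lv_active=active' snapper_thinp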

Comment 8 Red Hat Bugzilla Rules Engine 2018-06-05 17:25:20 UTC
Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.

Comment 11 Zdenek Kabelac 2018-06-27 13:53:26 UTC
Upstream patches will soon be merged into the stable branch for 7.6:

https://www.redhat.com/archives/lvm-devel/2018-June/msg00202.html

and one for the test suite:

https://www.redhat.com/archives/lvm-devel/2018-June/msg00206.html

Comment 14 Corey Marthaler 2018-07-26 19:22:09 UTC
Fix verified in the latest rpms.

3.10.0-925.el7.x86_64

lvm2-2.02.180-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
lvm2-libs-2.02.180-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
lvm2-cluster-2.02.180-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
lvm2-python-boom-0.9-4.el7    BUILT: Fri Jul 20 12:23:30 CDT 2018
cmirror-2.02.180-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-1.02.149-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-libs-1.02.149-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-event-1.02.149-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-event-libs-1.02.149-1.el7    BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7    BUILT: Tue Nov 14 05:07:18 CST 2017


# THIN VIRTS
SCENARIO - [vgcfgrestore_protection_against_active]
Create a thin pool, extend it, attempt to restore it (while active) to before the extend using the archive file
Making pool volume
lvcreate  --thinpool POOL -L 1G  --zero n --poolmetadatasize 4M snapper_thinp

Sanity checking pool device (POOL) metadata
thin_check /dev/mapper/snapper_thinp-meta_swap.305
examining superblock
examining devices tree
examining mapping tree
checking space map counts

Making origin volume
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other1
  WARNING: Sum of all thin volume sizes (2.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other2
  WARNING: Sum of all thin volume sizes (3.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate  -V 1G -T snapper_thinp/POOL -n other3
  WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate  -V 1G -T snapper_thinp/POOL -n other4
  WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate  -V 1G -T snapper_thinp/POOL -n other5
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
Making snapshot of origin volume
lvcreate  -y -k n -s /dev/snapper_thinp/origin -n active_restore
lvextend -L +200M snapper_thinp/active_restore
  WARNING: Sum of all thin volume sizes (<7.20 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).

Grabbing the latest VG archive file, and attempting to restore
vgcfgrestore --force -f /etc/lvm/archive/snapper_thinp_00102-734237907.vg snapper_thinp
  WARNING: Found 11 active volume(s) in volume group "snapper_thinp".
Do you really want to proceed with restore of volume group "snapper_thinp", while 11 volume(s) are active? [y/n]: [n]
  Restore aborted.
Checking syslog to see if vgcfgrestore segfaulted

Deactivating origin/snap volume(s)
lvchange -an snapper_thinp/active_restore
vgchange -an snapper_thinp

# Note a force is required when thin virts are present
vgcfgrestore --force -f /etc/lvm/archive/snapper_thinp_00102-734237907.vg snapper_thinp
  WARNING: Forced restore of Volume Group snapper_thinp with thin volumes.
  Scan of VG snapper_thinp from /dev/sda1 found mda_checksum ef951ab8 mda_size 4999 vs previous 23e3f81b 4999
  Scan of VG snapper_thinp from /dev/sdb1 found mda_checksum ef951ab8 mda_size 4999 vs previous 23e3f81b 4999
  Scan of VG snapper_thinp from /dev/sdc1 found mda_checksum ef951ab8 mda_size 4999 vs previous 23e3f81b 4999
  Scan of VG snapper_thinp from /dev/sdd1 found mda_checksum ef951ab8 mda_size 4999 vs previous 23e3f81b 4999
  Scan of VG snapper_thinp from /dev/sde1 found mda_checksum ef951ab8 mda_size 4999 vs previous 23e3f81b 4999
Activating origin/snap volume(s)

Removing snap volume snapper_thinp/active_restore
lvremove -f /dev/snapper_thinp/active_restore
Removing thin origin and other virtual thin volumes
Removing pool snapper_thinp/POOL



# RAID
SCENARIO (raid1) - [vgcfgrestore_protection_against_active]
Create a raid, extend it, attempt to restore it to before the extend using the archive file
host-087: lvcreate  --nosync --type raid1 -m 1 -n active_restore -L 100M raid_sanity
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
lvextend -L +200M raid_sanity/active_restore

Grabbing the latest VG archive file, and attempting to restore
vgcfgrestore --force -f /etc/lvm/archive/raid_sanity_00151-2074633835.vg raid_sanity
  WARNING: Found 5 active volume(s) in volume group "raid_sanity".
Do you really want to proceed with restore of volume group "raid_sanity", while 5 volume(s) are active? [y/n]: [n]
  Restore aborted.
Checking syslog to see if vgcfgrestore segfaulted

Deactivating active_restore raid
vgcfgrestore -f /etc/lvm/archive/raid_sanity_00151-2074633835.vg raid_sanity
  Scan of VG raid_sanity from /dev/sda1 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sda2 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdb1 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdb2 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdc1 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdc2 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdd1 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdd2 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sde1 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sde2 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
Activating active_restore raid

Deactivating raid active_restore... and removing

Comment 15 loberman 2018-07-26 19:23:51 UTC
Great, Thanks

Comment 17 errata-xmlrpc 2018-10-30 11:02:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3193

