Bug 1509041
| Summary: | request to add warning when using vgcfgrestore against activated volume | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | John Pittman <jpittman> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Command-line tools | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | medium | CC: | agk, cmarthal, coughlan, heinzm, jbrassow, loberman, msnitzer, prajnoha, rhandlin, thornber, zkabelac |
| Version: | 7.4 | Keywords: | Reopened |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.180-1.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-30 11:02:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1546181 | | |
Description
John Pittman
2017-11-02 19:09:09 UTC
Description of problem:

On-disk metadata and the kernel can get out of sync when vgcfgrestore is run against an activated volume. A warning needs to be put into place to prompt/warn the user, or the command should require --force.

Version-Release number of selected component (if applicable):
device-mapper-1.02.140-8.el7.x86_64
lvm2-2.02.171-8.el7.x86_64
kernel-3.10.0-693.2.2.el7.x86_64

## Appears the warning already exists for virt thin origin volumes.

[root@host-073 ~]# vgcfgrestore -f /etc/lvm/archive/snapper_thinp_00013-1327125970.vg snapper_thinp
  Consider using option --force to restore Volume Group snapper_thinp with thin volumes.
  Restore failed.

## However, should the "--force" succeed even when active? Or should LVM require the thin and pool be inactive here?

[root@host-073 ~]# vgcfgrestore --force -f /etc/lvm/archive/snapper_thinp_00013-1327125970.vg snapper_thinp
  WARNING: Forced restore of Volume Group snapper_thinp with thin volumes.
  Restored volume group snapper_thinp

[root@host-073 ~]# lvs -a -o +devices
  WARNING: Cannot find matching thin segment for snapper_thinp/active_restore.
  LV               VG             Attr        LSize  Pool  Origin  Data%  Meta%  Devices
  POOL             snapper_thinp  twi-aotz--  1.00g                0.00   1.56   POOL_tdata(0)
  [POOL_tdata]     snapper_thinp  Twi-ao----  1.00g                              /dev/sda3(1)
  [POOL_tmeta]     snapper_thinp  ewi-ao----  4.00m                              /dev/sdb2(0)
  active_restore   snapper_thinp  Vwi-XXtzX-  1.00g  POOL  origin
  [lvol0_pmspare]  snapper_thinp  ewi-------  4.00m                              /dev/sda3(0)
  origin           snapper_thinp  Vwi-a-tz--  1.00g  POOL          0.00

Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.

Upstream patches will soon be merged into the stable branch for 7.6:
https://www.redhat.com/archives/lvm-devel/2018-June/msg00202.html
and one for the test suite:
https://www.redhat.com/archives/lvm-devel/2018-June/msg00206.html

Patches cherry-picked into stable branch:
https://www.redhat.com/archives/lvm-devel/2018-July/msg00009.html
https://www.redhat.com/archives/lvm-devel/2018-July/msg00010.html

Fix verified in the latest rpms.

3.10.0-925.el7.x86_64
lvm2-2.02.180-1.el7                         BUILT: Fri Jul 20 12:21:35 CDT 2018
lvm2-libs-2.02.180-1.el7                    BUILT: Fri Jul 20 12:21:35 CDT 2018
lvm2-cluster-2.02.180-1.el7                 BUILT: Fri Jul 20 12:21:35 CDT 2018
lvm2-python-boom-0.9-4.el7                  BUILT: Fri Jul 20 12:23:30 CDT 2018
cmirror-2.02.180-1.el7                      BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-1.02.149-1.el7                BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-libs-1.02.149-1.el7           BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-event-1.02.149-1.el7          BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-event-libs-1.02.149-1.el7     BUILT: Fri Jul 20 12:21:35 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7   BUILT: Tue Nov 14 05:07:18 CST 2017
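For reference, a minimal sketch of the workflow the requested warning is meant to steer users toward, reusing the VG and archive file names from the reproduction above (the exact archive file name will differ on other systems; this is a sketch, not part of the original logs):

  # List the available archives and check which LVs are still active
  vgcfgrestore --list snapper_thinp
  lvs -o lv_name,lv_active snapper_thinp

  # Deactivate the whole VG, restore the archived metadata, then reactivate.
  # --force is only needed here because the VG contains thin volumes.
  vgchange -an snapper_thinp
  vgcfgrestore --force -f /etc/lvm/archive/snapper_thinp_00013-1327125970.vg snapper_thinp
  vgchange -ay snapper_thinp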
# THIN VIRTS SCENARIO - [vgcfgrestore_protection_against_active]
Create a thin pool, extend it, attempt to restore it (while active) to before the extend using the archive file

Making pool volume
lvcreate --thinpool POOL -L 1G --zero n --poolmetadatasize 4M snapper_thinp

Sanity checking pool device (POOL) metadata
thin_check /dev/mapper/snapper_thinp-meta_swap.305
  examining superblock
  examining devices tree
  examining mapping tree
  checking space map counts

Making origin volume
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n other1
  WARNING: Sum of all thin volume sizes (2.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n other2
  WARNING: Sum of all thin volume sizes (3.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate -V 1G -T snapper_thinp/POOL -n other3
  WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate -V 1G -T snapper_thinp/POOL -n other4
  WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).
lvcreate -V 1G -T snapper_thinp/POOL -n other5
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).

Making snapshot of origin volume
lvcreate -y -k n -s /dev/snapper_thinp/origin -n active_restore
lvextend -L +200M snapper_thinp/active_restore
  WARNING: Sum of all thin volume sizes (<7.20 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB).

Grabbing the latest VG archive file, and attempting to restore
vgcfgrestore --force -f /etc/lvm/archive/snapper_thinp_00102-734237907.vg snapper_thinp
  WARNING: Found 11 active volume(s) in volume group "snapper_thinp".
  Do you really want to proceed with restore of volume group "snapper_thinp", while 11 volume(s) are active? [y/n]: [n]
  Restore aborted.

Checking syslog to see if vgcfgrestore segfaulted

Deactivating origin/snap volume(s)
lvchange -an snapper_thinp/active_restore
vgchange -an snapper_thinp

# Note a force is required when thin virts are present
vgcfgrestore --force -f /etc/lvm/archive/snapper_thinp_00102-734237907.vg snapper_thinp
  WARNING: Forced restore of Volume Group snapper_thinp with thin volumes.
  Scan of VG snapper_thinp from /dev/sda1 found mda_checksum ef951ab8 mda_size 4999 vs previous 23e3f81b 4999
  Scan of VG snapper_thinp from /dev/sdb1 found mda_checksum ef951ab8 mda_size 4999 vs previous 23e3f81b 4999
  Scan of VG snapper_thinp from /dev/sdc1 found mda_checksum ef951ab8 mda_size 4999 vs previous 23e3f81b 4999
  Scan of VG snapper_thinp from /dev/sdd1 found mda_checksum ef951ab8 mda_size 4999 vs previous 23e3f81b 4999
  Scan of VG snapper_thinp from /dev/sde1 found mda_checksum ef951ab8 mda_size 4999 vs previous 23e3f81b 4999

Activating origin/snap volume(s)

Removing snap volume snapper_thinp/active_restore
lvremove -f /dev/snapper_thinp/active_restore
Removing thin origin and other virtual thin volumes
Removing pool snapper_thinp/POOL
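The [y/n] prompts in the scenario above were answered at a terminal. For a scripted run the answer can also be supplied on stdin; a minimal sketch reusing the archive file from the log above (the -y form assumes this build honours the common -y|--yes option, which is an assumption, not something shown in the log):

  # Decline the new prompt non-interactively (equivalent to the [n] answer above);
  # the restore aborts and the activated VG is left untouched
  echo n | vgcfgrestore --force -f /etc/lvm/archive/snapper_thinp_00102-734237907.vg snapper_thinp

  # Pre-answer the prompt with yes (only once the consequences are understood)
  vgcfgrestore -y --force -f /etc/lvm/archive/snapper_thinp_00102-734237907.vg snapper_thinp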
# RAID SCENARIO (raid1) - [vgcfgrestore_protection_against_active]
Create a raid, extend it, attempt to restore it to before the extend using the archive file

host-087: lvcreate --nosync --type raid1 -m 1 -n active_restore -L 100M raid_sanity
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
lvextend -L +200M raid_sanity/active_restore

Grabbing the latest VG archive file, and attempting to restore
vgcfgrestore --force -f /etc/lvm/archive/raid_sanity_00151-2074633835.vg raid_sanity
  WARNING: Found 5 active volume(s) in volume group "raid_sanity".
  Do you really want to proceed with restore of volume group "raid_sanity", while 5 volume(s) are active? [y/n]: [n]
  Restore aborted.

Checking syslog to see if vgcfgrestore segfaulted

Deactivating active_restore raid
vgcfgrestore -f /etc/lvm/archive/raid_sanity_00151-2074633835.vg raid_sanity
  Scan of VG raid_sanity from /dev/sda1 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sda2 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdb1 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdb2 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdc1 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdc2 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdd1 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sdd2 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sde1 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980
  Scan of VG raid_sanity from /dev/sde2 found mda_checksum 5a54c40b mda_size 3980 vs previous 99d9d6b0 3980

Activating active_restore raid
Deactivating raid active_restore... and removing

Great, Thanks

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3193