Bug 1264972
| Summary: | unable to resize thin meta volume residing on a shared VG | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | David Teigland <teigland> |
| lvm2 sub component: | LVM lock daemon / lvmlockd | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | high | CC: | agk, coughlan, heinzm, jbrassow, jkachuck, prajnoha, rbednar, rsussman, teigland, zkabelac |
| Version: | 7.2 | | |
| Target Milestone: | rc | | |
| Target Release: | 7.3 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.156-1.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-11-04 04:10:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1295577, 1313485, 1364088 | | |
| Attachments: | | | |
Description
Corey Marthaler
2015-09-21 18:57:04 UTC
Created attachment 1075566 [details]
-vvvv of the lvextend attempt
It looks like there's an improved syntax for resizing the metadata device, which does work in a shared VG: lvextend --poolmetadatasize <size> vg/pool

Thanks, I'll update the tests to use the improved syntax.

[root@harding-03 ~]# lvextend --poolmetadatasize +100M snapper_thinp/resize
Extending logical volume resize_tmeta to 104.00 MiB.
Logical volume resize successfully resized.

Adding QA ACK for 7.3. Also it would be nice to have this added to the man pages.

Works with --poolmetadatasize.

Created attachment 1168302 [details]
Test result
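For context, here is a minimal sketch of the resize path that works on a shared (lvmlockd) VG. The device path /dev/sdb1 and the names vg_shared/POOL are placeholders for illustration, not taken from this report:

vgcreate --shared vg_shared /dev/sdb1                # shared VG managed by lvmlockd
vgchange --lock-start vg_shared                      # start the VG lockspace on this host
lvcreate --activate ey --thinpool POOL -L 2G --poolmetadatasize 4M vg_shared
lvextend --poolmetadatasize +100M vg_shared/POOL     # supported: grow metadata via the pool LV
lvextend -L +100M vg_shared/POOL_tmeta               # not supported on a shared VG, see the regression check below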
Verified. Full test results in attachment.

[snapper_thinp_raid1_thin_pool_resize] SCENARIO - [thin_pool_resize]
...
[snapper_thinp_thin_pool_resize] Extending thin pool meta data volume...
[snapper_thinp_thin_pool_resize] lvextend --poolmetadatasize +100M snapper_thinp/resize
...

------------------- Summary ---------------------
Testcase                                   Result
--------                                   ------
snapper_thinp_raid10_thin_pool_resize      PASS
snapper_thinp_thin_pool_resize             PASS
snapper_thinp_raid1_thin_pool_resize       PASS
snapper_thinp_raid10_thin_pool_resize      PASS
snapper_thinp_thin_pool_resize             PASS
snapper_thinp_raid1_thin_pool_resize       PASS
=================================================

Tested with:
3.10.0-429.el7.x86_64
lvm2-2.02.156-1.el7 BUILT: Mon Jun 13 10:05:51 CEST 2016
lvm2-libs-2.02.156-1.el7 BUILT: Mon Jun 13 10:05:51 CEST 2016
lvm2-cluster-2.02.156-1.el7 BUILT: Mon Jun 13 10:05:51 CEST 2016
device-mapper-1.02.126-1.el7 BUILT: Mon Jun 13 10:05:51 CEST 2016
device-mapper-libs-1.02.126-1.el7 BUILT: Mon Jun 13 10:05:51 CEST 2016
device-mapper-event-1.02.126-1.el7 BUILT: Mon Jun 13 10:05:51 CEST 2016
device-mapper-event-libs-1.02.126-1.el7 BUILT: Mon Jun 13 10:05:51 CEST 2016
device-mapper-persistent-data-0.6.2-0.1.rc8.el7 BUILT: Wed May 4 09:56:34 CEST 2016
cmirror-2.02.156-1.el7 BUILT: Mon Jun 13 10:05:51 CEST 2016

(In reply to Roman Bednář from comment #7)
> Created attachment 1168302 [details]
> Test result

That test does not appear to use a shared VG. Here is a proper regression check on shared storage with the latest 7.3 rpms showing that 'lvresize --poolmetadatasize' continues to work and 'lvextend -L' continues to *not* work (as expected).
3.10.0-418.el7.x86_64
lvm2-2.02.152-2.el7 BUILT: Thu May 5 02:33:28 CDT 2016
lvm2-libs-2.02.152-2.el7 BUILT: Thu May 5 02:33:28 CDT 2016
lvm2-cluster-2.02.152-2.el7 BUILT: Thu May 5 02:33:28 CDT 2016
device-mapper-1.02.124-2.el7 BUILT: Thu May 5 02:33:28 CDT 2016
device-mapper-libs-1.02.124-2.el7 BUILT: Thu May 5 02:33:28 CDT 2016
device-mapper-event-1.02.124-2.el7 BUILT: Thu May 5 02:33:28 CDT 2016
device-mapper-event-libs-1.02.124-2.el7 BUILT: Thu May 5 02:33:28 CDT 2016
device-mapper-persistent-data-0.6.2-0.1.rc8.el7 BUILT: Wed May 4 02:56:34 CDT 2016
cmirror-2.02.152-2.el7 BUILT: Thu May 5 02:33:28 CDT 2016
sanlock-3.3.0-1.el7 BUILT: Wed Feb 24 09:52:30 CST 2016
sanlock-lib-3.3.0-1.el7 BUILT: Wed Feb 24 09:52:30 CST 2016
lvm2-lockd-2.02.152-2.el7 BUILT: Thu May 5 02:33:28 CDT 2016
Turning on lvmlockd in lvm.conf
mckinley-01 mckinley-02 mckinley-03 mckinley-04
Starting sanlock on
mckinley-01 mckinley-02 mckinley-03 mckinley-04
Starting lvmlockd on
mckinley-01 mckinley-02 mckinley-03 mckinley-04
setting up first "global lock" dummy vg for lvmlockd...
vgcreate --shared global /dev/mapper/mpathh1
mckinley-01: vgchange --lock-start global
mckinley-02: vgchange --lock-start global
Skipping global lock: lockspace not found or started
mckinley-03: vgchange --lock-start global
Skipping global lock: lockspace not found or started
mckinley-04: vgchange --lock-start global
Skipping global lock: lockspace not found or started
creating lvm devices...
mckinley-01: pvcreate /dev/mapper/mpatha1 /dev/mapper/mpathe1 /dev/mapper/mpathc1 /dev/mapper/mpathb1 /dev/mapper/mpathf1
mckinley-01: vgcreate --shared snapper_thinp /dev/mapper/mpatha1 /dev/mapper/mpathe1 /dev/mapper/mpathc1 /dev/mapper/mpathb1 /dev/mapper/mpathf1
mckinley-01: vgchange --lock-start snapper_thinp
mckinley-02: vgchange --lock-start snapper_thinp
mckinley-03: vgchange --lock-start snapper_thinp
mckinley-04: vgchange --lock-start snapper_thinp
============================================================
Iteration 1 of 1 started at Wed Jun 15 11:35:42 CDT 2016
============================================================
SCENARIO - [resize_pool_meta_device]
Create an XFS filesystem, mount it, snapshot it, and attempt to resize its pool meta device while online
Making pool volume
lvcreate --activate ey --thinpool POOL -L 2G --profile thin-performance --zero n --poolmetadatasize 4M snapper_thinp
Making origin volume
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n other1
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other2
WARNING: Sum of all thin volume sizes (3.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (2.00 GiB)!
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other3
WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (2.00 GiB)!
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other4
WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (2.00 GiB)!
lvcreate --activate ey -V 1G -T snapper_thinp/POOL -n other5
WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (2.00 GiB)!
Making snapshot of origin volume
lvcreate --activate ey -k n -s /dev/snapper_thinp/origin -n meta_resize
[root@mckinley-02 ~]# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Devices
[lvmlock] global -wi-ao---- 256.00m /dev/mapper/mpathc1(0)
POOL snapper_thinp twi-aot--- 2.00g 0.21 1.76 POOL_tdata(0)
[POOL_tdata] snapper_thinp Twi-ao---- 2.00g /dev/mapper/mpathe1(65)
[POOL_tmeta] snapper_thinp ewi-ao---- 4.00m /dev/mapper/mpathf1(0)
[lvmlock] snapper_thinp -wi-ao---- 256.00m /dev/mapper/mpathe1(0)
[lvol0_pmspare] snapper_thinp ewi------- 4.00m /dev/mapper/mpathe1(64)
meta_resize snapper_thinp Vwi-a-t--- 1.00g POOL origin 0.37
origin snapper_thinp Vwi-aot--- 1.00g POOL 0.37
other1 snapper_thinp Vwi-a-t--- 1.00g POOL 0.01
other2 snapper_thinp Vwi-a-t--- 1.00g POOL 0.01
other3 snapper_thinp Vwi-a-t--- 1.00g POOL 0.01
other4 snapper_thinp Vwi-a-t--- 1.00g POOL 0.01
other5 snapper_thinp Vwi-a-t--- 1.00g POOL 0.01
[root@mckinley-02 ~]# lvresize --poolmetadatasize +40M /dev/snapper_thinp/POOL
Extending logical volume snapper_thinp/POOL_tmeta to 44.00 MiB.
Logical volume POOL successfully resized.
[root@mckinley-02 ~]# lvextend -L +100M snapper_thinp/POOL_tmeta
Lock on incorrect thin lv type snapper_thinp/POOL_tmeta
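The refusal above appears to come from the lvmlockd locking layer: hidden sub-LVs such as POOL_tmeta are covered by the thin pool's lock rather than holding locks of their own, so resizing them directly is rejected on a shared VG. The supported route is to go through the pool LV, for example:

lvextend --poolmetadatasize +100M snapper_thinp/POOL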
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-1445.html