Description of problem:

# latest rpm: lvm2-2.02.164-2.el7.x86_64

# raid10 volume
[root@host-116 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize Cpy%Sync Devices
  POOL            test rwi-a-r--- 4.00g 100.00   POOL_rimage_0(0),POOL_rimage_1(0),POOL_rimage_2(0),POOL_rimage_3(0)
  [POOL_rimage_0] test iwi-aor--- 2.00g          /dev/sda1(1)
  [POOL_rimage_1] test iwi-aor--- 2.00g          /dev/sdb1(1)
  [POOL_rimage_2] test iwi-aor--- 2.00g          /dev/sdc1(1)
  [POOL_rimage_3] test iwi-aor--- 2.00g          /dev/sdd1(1)
  [POOL_rmeta_0]  test ewi-aor--- 4.00m          /dev/sda1(0)
  [POOL_rmeta_1]  test ewi-aor--- 4.00m          /dev/sdb1(0)
  [POOL_rmeta_2]  test ewi-aor--- 4.00m          /dev/sdc1(0)
  [POOL_rmeta_3]  test ewi-aor--- 4.00m          /dev/sdd1(0)

# convert to raid1 attempt
[root@host-116 ~]# lvconvert --type raid1 -m 1 test/POOL
  LV POOL invalid: minimum 4 areas required (is 2) for raid10 segment
  Internal error: LV segments corrupted in POOL.
  Failed to write changes to POOL in test

# convert to mirror attempt
[root@host-116 ~]# lvconvert --type mirror -m 1 test/POOL
  Conversion operation not yet supported.


# older rpm: lvm2-2.02.160-1.el7.x86_64

# raid10 volume
[root@host-117 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize Cpy%Sync Devices
  POOL            test rwi-a-r--- 2.00g 100.00   POOL_rimage_0(0),POOL_rimage_1(0),POOL_rimage_2(0),POOL_rimage_3(0)
  [POOL_rimage_0] test iwi-aor--- 1.00g          /dev/sda1(1)
  [POOL_rimage_1] test iwi-aor--- 1.00g          /dev/sdb1(1)
  [POOL_rimage_2] test iwi-aor--- 1.00g          /dev/sdc1(1)
  [POOL_rimage_3] test iwi-aor--- 1.00g          /dev/sdd1(1)
  [POOL_rmeta_0]  test ewi-aor--- 4.00m          /dev/sda1(0)
  [POOL_rmeta_1]  test ewi-aor--- 4.00m          /dev/sdb1(0)
  [POOL_rmeta_2]  test ewi-aor--- 4.00m          /dev/sdc1(0)
  [POOL_rmeta_3]  test ewi-aor--- 4.00m          /dev/sdd1(0)

# convert to raid1 attempt
[root@host-117 ~]# lvconvert --type raid1 -m 1 test/POOL
  device-mapper: reload ioctl on (253:10) failed: Input/output error
  Failed to suspend test/POOL before committing changes

# convert to mirror attempt
[root@host-117 ~]# lvconvert --type mirror -m 1 test/POOL
  WARNING: Reading VG test from disk because lvmetad metadata is invalid.
  device-mapper: reload ioctl on (253:10) failed: Input/output error
  Failed to suspend test/POOL before committing changes
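For reference, a raid10 LV matching the layout shown above can be set up as follows. These reproduction steps are not part of the original report: the PVs are taken from the lvs output, but the exact lvcreate options used are an assumption.

# assumed reproduction setup: 4 PVs, 2 stripes x 2 copies, matching host-116's 4.00g POOL
pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
vgcreate test /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
lvcreate --type raid10 -i 2 -m 1 -L 4G -n POOL test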
This is like the problems we've seen elsewhere: the -m argument is dominating, so the tool never really acts on --type and instead tries to change the number of mirrors on the raid10 LV, where only 2 copies are supported.
It also doesn't look like a regression: this wasn't working properly before the last set of changes either, albeit hitting a different failure mode.
Fixed in the next release. (The error messages you get when attempting a disallowed operation remain a bit inconsistent, but that level of tidy-up will have to wait for RHEL 7.4.)
The new message is simply: --mirrors/-m cannot be changed with raid10.
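As a quick sanity check before attempting such a conversion, the LV's current segment type and stripe count can be inspected with standard lvs reporting fields. This invocation is not from the original report; it is only an illustration:

# segtype/stripes are standard lvs fields; a raid10 LV reports segtype "raid10"
lvs -o lv_name,segtype,stripes,devices test/POOL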
Fix verified in the latest rpms.

lvm2-2.02.164-3.el7            BUILT: Wed Aug 24 05:20:41 CDT 2016
lvm2-libs-2.02.164-3.el7       BUILT: Wed Aug 24 05:20:41 CDT 2016
lvm2-cluster-2.02.164-3.el7    BUILT: Wed Aug 24 05:20:41 CDT 2016

[root@host-117 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize Log Cpy%Sync Devices
  POOL            test rwi-a-r--- 2.00g     100.00   POOL_rimage_0(0),POOL_rimage_1(0),POOL_rimage_2(0),POOL_rimage_3(0)
  [POOL_rimage_0] test iwi-aor--- 1.00g              /dev/sda1(1)
  [POOL_rimage_1] test iwi-aor--- 1.00g              /dev/sdb1(1)
  [POOL_rimage_2] test iwi-aor--- 1.00g              /dev/sdc1(1)
  [POOL_rimage_3] test iwi-aor--- 1.00g              /dev/sdd1(1)
  [POOL_rmeta_0]  test ewi-aor--- 4.00m              /dev/sda1(0)
  [POOL_rmeta_1]  test ewi-aor--- 4.00m              /dev/sdb1(0)
  [POOL_rmeta_2]  test ewi-aor--- 4.00m              /dev/sdc1(0)
  [POOL_rmeta_3]  test ewi-aor--- 4.00m              /dev/sdd1(0)

test-POOL: 0 4194304 raid raid10 4 AAAA 4194304/4194304 idle 0 0

[root@host-117 ~]# lvconvert --type raid1 -m 1 test/POOL
  --mirrors/-m cannot be changed with raid10.
[root@host-117 ~]# lvconvert --type mirror -m 1 test/POOL
  --mirrors/-m cannot be changed with raid10.
[root@host-117 ~]# lvconvert --type raid1 -m 2 test/POOL
  --mirrors/-m cannot be changed with raid10.
[root@host-117 ~]# lvconvert --type mirror -m 2 test/POOL
  --mirrors/-m cannot be changed with raid10.
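The bare "test-POOL: ..." line above is device-mapper status output confirming the LV is still an intact raid10 with all four legs in sync; the report does not show the command used to collect it, but the standard way to obtain such a line is:

dmsetup status test-POOL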
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html