Bug 1315491
| Summary: | unable to remove extended thin pool on top of raid - "Unable to reduce RAID LV" | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Mirroring and RAID (RHEL6) | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Severity: | medium |
| Priority: | unspecified | CC: | agk, cmarthal, heinzm, jbrassow, msnitzer, prajnoha, prockai, tlavigne, zkabelac |
| Version: | 6.8 | Keywords: | Regression |
| Target Milestone: | rc | Target Release: | --- |
| Hardware: | x86_64 | OS: | Linux |
| Fixed In Version: | lvm2-2.02.143-2.el6 | Doc Type: | Bug Fix |
| Last Closed: | 2016-05-11 01:20:53 UTC | Type: | Bug |
(need -vvvv from the failing command) Please attach the output from the failing command: `lvremove -vvvv test`

Created attachment 1134964 [details]
lvremove -vvvv

This got broken by the following upstream commit: https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=b64703401da1f4bef60579a0b3766c087fcfe96a
Working on a bugfix.

Fixed by this upstream commit: https://www.redhat.com/archives/lvm-devel/2016-March/msg00114.html
Somehow this case was missed by the test suite.

Verified working again with the latest rpms.

```
2.6.32-616.el6.x86_64

lvm2-2.02.143-2.el6                              BUILT: Wed Mar 16 08:30:42 CDT 2016
lvm2-libs-2.02.143-2.el6                         BUILT: Wed Mar 16 08:30:42 CDT 2016
lvm2-cluster-2.02.143-2.el6                      BUILT: Wed Mar 16 08:30:42 CDT 2016
udev-147-2.72.el6                                BUILT: Tue Mar  1 06:14:05 CST 2016
device-mapper-1.02.117-2.el6                     BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-libs-1.02.117-2.el6                BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-event-1.02.117-2.el6               BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-event-libs-1.02.117-2.el6          BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-persistent-data-0.6.2-0.1.rc5.el6  BUILT: Wed Feb 24 07:07:09 CST 2016
cmirror-2.02.143-2.el6                           BUILT: Wed Mar 16 08:30:42 CDT 2016
```

```
[root@host-116 ~]# vgcreate test /dev/sd[abcdefgh]1
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdh1" successfully created
  Volume group "test" successfully created

[root@host-116 ~]# lvcreate --type raid1 -m 1 -L 100M -n raid test
  Logical volume "raid" created.

[root@host-116 ~]# lvconvert --thinpool test/raid --yes
  WARNING: Converting logical volume test/raid to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raid to thin pool.

[root@host-116 ~]# lvextend -l100%FREE test/raid
  Extending 2 mirror images.
  Size of logical volume test/raid_tdata changed from 100.00 MiB (25 extents) to 99.86 GiB (25564 extents).
  Logical volume raid successfully resized.

[root@host-116 ~]# lvs -a -o +devices
  LV                     VG    Attr       LSize   Pool Origin Data%  Meta%  Cpy%Sync Devices
  [lvol0_pmspare]        test  ewi-------  4.00m                                     /dev/sda1(27)
  raid                   test  twi-a-tz-- 99.86g              0.00   10.64           raid_tdata(0)
  [raid_tdata]           test  rwi-aor--- 99.86g                            100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0]  test  iwi-aor--- 99.86g                                     /dev/sda1(1)
  [raid_tdata_rimage_0]  test  iwi-aor--- 99.86g                                     /dev/sda1(28)
  [raid_tdata_rimage_0]  test  iwi-aor--- 99.86g                                     /dev/sdc1(0)
  [raid_tdata_rimage_0]  test  iwi-aor--- 99.86g                                     /dev/sde1(0)
  [raid_tdata_rimage_0]  test  iwi-aor--- 99.86g                                     /dev/sdg1(0)
  [raid_tdata_rimage_1]  test  iwi-aor--- 99.86g                                     /dev/sdb1(1)
  [raid_tdata_rimage_1]  test  iwi-aor--- 99.86g                                     /dev/sdd1(0)
  [raid_tdata_rimage_1]  test  iwi-aor--- 99.86g                                     /dev/sdf1(0)
  [raid_tdata_rimage_1]  test  iwi-aor--- 99.86g                                     /dev/sdh1(0)
  [raid_tdata_rmeta_0]   test  ewi-aor---  4.00m                                     /dev/sda1(0)
  [raid_tdata_rmeta_1]   test  ewi-aor---  4.00m                                     /dev/sdb1(0)
  [raid_tmeta]           test  ewi-ao----  4.00m                                     /dev/sda1(26)

[root@host-116 ~]# lvremove test
Do you really want to remove active logical volume raid? [y/n]: y
  Logical volume "raid" successfully removed
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0964.html
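The fix shipped with lvm2-2.02.143-2.el6 (see "Fixed In Version" above). As a quick sanity check, the installed build can be compared against that version. The snippet below is only a minimal sketch added for convenience, not part of the original report; the `sort -V` comparison is an approximation that ignores rpm epoch handling.

```bash
#!/bin/bash
# Minimal sketch (not from the original report): report whether the installed
# lvm2 package on a RHEL 6 host is at least the build this bug was fixed in.
fixed="2.02.143-2.el6"
installed=$(rpm -q --qf '%{VERSION}-%{RELEASE}\n' lvm2)

echo "installed: lvm2-$installed   fixed in: lvm2-$fixed"

# sort -V orders version strings; if the fixed build sorts first (or equal),
# the installed package already contains the fix. Approximate only: it does
# not understand rpm epochs.
if [ "$(printf '%s\n' "$fixed" "$installed" | sort -V | head -n1)" = "$fixed" ]; then
    echo "lvm2 is at or newer than the fixed build"
else
    echo "lvm2 predates the fix - update to lvm2-2.02.143-2.el6 or later"
fi
```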
Description of problem:

This appears to be a regression between the 6.8 rpms 140-3 (working) and 141-2 (failing), and is also present in the current version 143-1.

### RHEL 6.8 (141-2)

```
[root@host-116 ~]# vgcreate test /dev/sd[abcdefgh]1
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdh1" successfully created
  Volume group "test" successfully created

[root@host-116 ~]# lvcreate --type raid1 -m 1 -L 100M -n raid test
  Logical volume "raid" created.

[root@host-116 ~]# lvs -a -o +devices
  LV               VG    Attr       LSize   Pool Origin Data%  Meta%  Cpy%Sync Devices
  raid             test  rwi-a-r--- 100.00m                           100.00   raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0]  test  iwi-aor--- 100.00m                                    /dev/sda1(1)
  [raid_rimage_1]  test  iwi-aor--- 100.00m                                    /dev/sdb1(1)
  [raid_rmeta_0]   test  ewi-aor---   4.00m                                    /dev/sda1(0)
  [raid_rmeta_1]   test  ewi-aor---   4.00m                                    /dev/sdb1(0)

[root@host-116 ~]# lvconvert --thinpool test/raid --yes
  WARNING: Converting logical volume test/raid to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raid to thin pool.

[root@host-116 ~]# lvs -a -o +devices
  LV                     VG    Attr       LSize   Pool Origin Data%  Meta%  Cpy%Sync Devices
  [lvol0_pmspare]        test  ewi-------   4.00m                                    /dev/sda1(27)
  raid                   test  twi-a-tz-- 100.00m             0.00   0.88            raid_tdata(0)
  [raid_tdata]           test  rwi-aor--- 100.00m                           100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0]  test  iwi-aor--- 100.00m                                    /dev/sda1(1)
  [raid_tdata_rimage_1]  test  iwi-aor--- 100.00m                                    /dev/sdb1(1)
  [raid_tdata_rmeta_0]   test  ewi-aor---   4.00m                                    /dev/sda1(0)
  [raid_tdata_rmeta_1]   test  ewi-aor---   4.00m                                    /dev/sdb1(0)
  [raid_tmeta]           test  ewi-ao----   4.00m                                    /dev/sda1(26)

[root@host-116 ~]# lvextend -l100%FREE test/raid
  Extending 2 mirror images.
  Size of logical volume test/raid_tdata changed from 100.00 MiB (25 extents) to 99.86 GiB (25564 extents).
  Logical volume raid successfully resized.

[root@host-116 ~]# lvs -a -o +devices
  LV                     VG    Attr       LSize   Pool Origin Data%  Meta%  Cpy%Sync Devices
  [lvol0_pmspare]        test  ewi-------   4.00m                                    /dev/sda1(27)
  raid                   test  twi-a-tz-- 100.00m             0.00   0.88            raid_tdata(0)
  [raid_tdata]           test  rwi-aor---  99.86g                           100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0]  test  iwi-aor---  99.86g                                    /dev/sda1(1)
  [raid_tdata_rimage_0]  test  iwi-aor---  99.86g                                    /dev/sda1(28)
  [raid_tdata_rimage_0]  test  iwi-aor---  99.86g                                    /dev/sdc1(0)
  [raid_tdata_rimage_0]  test  iwi-aor---  99.86g                                    /dev/sde1(0)
  [raid_tdata_rimage_0]  test  iwi-aor---  99.86g                                    /dev/sdg1(0)
  [raid_tdata_rimage_1]  test  iwi-aor---  99.86g                                    /dev/sdb1(1)
  [raid_tdata_rimage_1]  test  iwi-aor---  99.86g                                    /dev/sdd1(0)
  [raid_tdata_rimage_1]  test  iwi-aor---  99.86g                                    /dev/sdf1(0)
  [raid_tdata_rimage_1]  test  iwi-aor---  99.86g                                    /dev/sdh1(0)
  [raid_tdata_rmeta_0]   test  ewi-aor---   4.00m                                    /dev/sda1(0)
  [raid_tdata_rmeta_1]   test  ewi-aor---   4.00m                                    /dev/sdb1(0)
  [raid_tmeta]           test  ewi-ao----   4.00m                                    /dev/sda1(26)

[root@host-116 ~]# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/sda1  test  lvm2 a--  24.99g       0
  /dev/sdb1  test  lvm2 a--  24.99g   8.00m
  /dev/sdc1  test  lvm2 a--  24.99g       0
  /dev/sdd1  test  lvm2 a--  24.99g       0
  /dev/sde1  test  lvm2 a--  24.99g       0
  /dev/sdf1  test  lvm2 a--  24.99g       0
  /dev/sdg1  test  lvm2 a--  24.99g 100.00m
  /dev/sdh1  test  lvm2 a--  24.99g 100.00m

[root@host-116 ~]# lvremove test
Do you really want to remove active logical volume raid? [y/n]: y
  Unable to reduce RAID LV - operation not implemented.
  Error releasing logical volume "raid"
```

### RHEL 6.8 (140-3)

```
[root@host-116 ~]# vgcreate test /dev/sd[abcdefgh]1
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdh1" successfully created
  Volume group "test" successfully created

[root@host-116 ~]# lvcreate --type raid1 -m 1 -L 100M -n raid test
  Logical volume "raid" created.

[root@host-116 ~]# lvs -a -o +devices
  LV               VG    Attr       LSize   Pool Origin Data%  Meta%  Cpy%Sync Devices
  raid             test  rwi-a-r--- 100.00m                           100.00   raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0]  test  iwi-aor--- 100.00m                                    /dev/sda1(1)
  [raid_rimage_1]  test  iwi-aor--- 100.00m                                    /dev/sdb1(1)
  [raid_rmeta_0]   test  ewi-aor---   4.00m                                    /dev/sda1(0)
  [raid_rmeta_1]   test  ewi-aor---   4.00m                                    /dev/sdb1(0)

[root@host-116 ~]# lvconvert --thinpool test/raid --yes
  WARNING: Converting logical volume test/raid to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raid to thin pool.

[root@host-116 ~]# lvs -a -o +devices
  LV                     VG    Attr       LSize   Pool Origin Data%  Meta%  Cpy%Sync Devices
  [lvol0_pmspare]        test  ewi-------   4.00m                                    /dev/sda1(27)
  raid                   test  twi-a-tz-- 100.00m             0.00   0.88            raid_tdata(0)
  [raid_tdata]           test  rwi-aor--- 100.00m                           100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0]  test  iwi-aor--- 100.00m                                    /dev/sda1(1)
  [raid_tdata_rimage_1]  test  iwi-aor--- 100.00m                                    /dev/sdb1(1)
  [raid_tdata_rmeta_0]   test  ewi-aor---   4.00m                                    /dev/sda1(0)
  [raid_tdata_rmeta_1]   test  ewi-aor---   4.00m                                    /dev/sdb1(0)
  [raid_tmeta]           test  ewi-ao----   4.00m                                    /dev/sda1(26)

[root@host-116 ~]# lvextend -l100%FREE test/raid
  Extending 2 mirror images.
  Size of logical volume test/raid_tdata changed from 100.00 MiB (25 extents) to 99.86 GiB (25564 extents).
  Logical volume raid successfully resized.

[root@host-116 ~]# lvs -a -o +devices
  LV                     VG    Attr       LSize   Pool Origin Data%  Meta%  Cpy%Sync Devices
  [lvol0_pmspare]        test  ewi-------   4.00m                                    /dev/sda1(27)
  raid                   test  twi-a-tz--  99.86g             0.00   10.64           raid_tdata(0)
  [raid_tdata]           test  rwi-aor---  99.86g                           100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0]  test  iwi-aor---  99.86g                                    /dev/sda1(1)
  [raid_tdata_rimage_0]  test  iwi-aor---  99.86g                                    /dev/sda1(28)
  [raid_tdata_rimage_0]  test  iwi-aor---  99.86g                                    /dev/sdc1(0)
  [raid_tdata_rimage_0]  test  iwi-aor---  99.86g                                    /dev/sde1(0)
  [raid_tdata_rimage_0]  test  iwi-aor---  99.86g                                    /dev/sdg1(0)
  [raid_tdata_rimage_1]  test  iwi-aor---  99.86g                                    /dev/sdb1(1)
  [raid_tdata_rimage_1]  test  iwi-aor---  99.86g                                    /dev/sdd1(0)
  [raid_tdata_rimage_1]  test  iwi-aor---  99.86g                                    /dev/sdf1(0)
  [raid_tdata_rimage_1]  test  iwi-aor---  99.86g                                    /dev/sdh1(0)
  [raid_tdata_rmeta_0]   test  ewi-aor---   4.00m                                    /dev/sda1(0)
  [raid_tdata_rmeta_1]   test  ewi-aor---   4.00m                                    /dev/sdb1(0)
  [raid_tmeta]           test  ewi-ao----   4.00m                                    /dev/sda1(26)

[root@host-116 ~]# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/sda1  test  lvm2 a--  24.99g       0
  /dev/sdb1  test  lvm2 a--  24.99g   8.00m
  /dev/sdc1  test  lvm2 a--  24.99g       0
  /dev/sdd1  test  lvm2 a--  24.99g       0
  /dev/sde1  test  lvm2 a--  24.99g       0
  /dev/sdf1  test  lvm2 a--  24.99g       0
  /dev/sdg1  test  lvm2 a--  24.99g 100.00m
  /dev/sdh1  test  lvm2 a--  24.99g 100.00m

[root@host-116 ~]# lvremove test
Do you really want to remove active logical volume raid? [y/n]: y
  Logical volume "raid" successfully removed
```
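For reference, the sequence above condenses into a short script. This is only a sketch derived from the transcript: it assumes eight disposable partitions /dev/sd[a-h]1 and a VG named `test` (substitute your own scratch devices), and it uses `lvremove -f` merely to skip the interactive confirmation shown above. On the 141-2 build the final step fails with "Unable to reduce RAID LV - operation not implemented"; on 140-3 and on the fixed 143-2 build it removes the pool cleanly.

```bash
#!/bin/bash
# Sketch of the reproduction steps from this report. Assumes eight disposable
# partitions /dev/sd[a-h]1; all data on them will be destroyed.
set -ex

vgcreate test /dev/sd[abcdefgh]1

# Small raid1 LV that will become the thin pool's data volume.
lvcreate --type raid1 -m 1 -L 100M -n raid test
lvconvert --thinpool test/raid --yes

# Grow the raid-backed data LV to all remaining space in the VG.
lvextend -l100%FREE test/raid
lvs -a -o +devices test

# The step that regressed: removing the extended thin pool on top of raid.
# (-f only skips the "Do you really want to remove ..." confirmation.)
lvremove -f test
```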