Bug 1315491 - unable to remove extended thin pool on top of raid - "Unable to reduce RAID LV"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-07 21:57 UTC by Corey Marthaler
Modified: 2016-05-11 01:20 UTC
CC: 9 users

Fixed In Version: lvm2-2.02.143-2.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-11 01:20:53 UTC
Target Upstream Version:


Attachments
lvremove -vvvv (86.59 KB, text/plain)
2016-03-10 17:18 UTC, Corey Marthaler


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0964 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2016-05-10 22:57:40 UTC

Description Corey Marthaler 2016-03-07 21:57:06 UTC
Description of problem:
This appears to be a regression introduced between the 6.8 rpms 140-3 (where lvremove succeeds) and 141-2 (where it fails), and it is still present in the current version 143-1.
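
For quick reference, the reproducer distilled from the transcripts below (device names as on this test host):

  vgcreate test /dev/sd[abcdefgh]1
  lvcreate --type raid1 -m 1 -L 100M -n raid test
  lvconvert --thinpool test/raid --yes    # raid becomes the pool's data LV (raid_tdata)
  lvextend -l100%FREE test/raid           # grows raid_tdata across the remaining PVs
  lvremove test                           # 140-3: succeeds; 141-2 and 143-1: "Unable to reduce RAID LV"

Full transcripts for both rpm versions follow.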


### RHEL 6.8 (141-2) - fails

[root@host-116 ~]# vgcreate test /dev/sd[abcdefgh]1
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdh1" successfully created
  Volume group "test" successfully created
[root@host-116 ~]#  lvcreate  --type raid1  -m 1 -L 100M -n raid test
  Logical volume "raid" created.
[root@host-116 ~]# lvs -a -o +devices
  LV              VG     Attr       LSize   Pool Origin Data%  Meta%  Cpy%Sync Devices
  raid            test   rwi-a-r--- 100.00m                           100.00   raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0] test   iwi-aor--- 100.00m                                    /dev/sda1(1)
  [raid_rimage_1] test   iwi-aor--- 100.00m                                    /dev/sdb1(1)
  [raid_rmeta_0]  test   ewi-aor---   4.00m                                    /dev/sda1(0)
  [raid_rmeta_1]  test   ewi-aor---   4.00m                                    /dev/sdb1(0)
[root@host-116 ~]# lvconvert --thinpool test/raid --yes
  WARNING: Converting logical volume test/raid to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raid to thin pool.
[root@host-116 ~]# lvs -a -o +devices
  LV                    VG     Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]       test   ewi-------   4.00m                                   /dev/sda1(27)
  raid                  test   twi-a-tz-- 100.00m             0.00   0.88           raid_tdata(0)
  [raid_tdata]          test   rwi-aor--- 100.00m                          100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] test   iwi-aor--- 100.00m                                   /dev/sda1(1)
  [raid_tdata_rimage_1] test   iwi-aor--- 100.00m                                   /dev/sdb1(1)
  [raid_tdata_rmeta_0]  test   ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_tdata_rmeta_1]  test   ewi-aor---   4.00m                                   /dev/sdb1(0)
  [raid_tmeta]          test   ewi-ao----   4.00m                                   /dev/sda1(26)
[root@host-116 ~]# lvextend -l100%FREE test/raid
  Extending 2 mirror images.
  Size of logical volume test/raid_tdata changed from 100.00 MiB (25 extents) to 99.86 GiB (25564 extents).
  Logical volume raid successfully resized.
[root@host-116 ~]# lvs -a -o +devices
  LV                    VG     Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]       test   ewi-------   4.00m                                   /dev/sda1(27)
  raid                  test   twi-a-tz-- 100.00m             0.00   0.88           raid_tdata(0)
  [raid_tdata]          test   rwi-aor---  99.86g                          100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sda1(1)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sda1(28)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sdc1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sde1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sdg1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdb1(1)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdd1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdf1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdh1(0)
  [raid_tdata_rmeta_0]  test   ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_tdata_rmeta_1]  test   ewi-aor---   4.00m                                   /dev/sdb1(0)
  [raid_tmeta]          test   ewi-ao----   4.00m                                   /dev/sda1(26)

[root@host-116 ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree  
  /dev/sda1  test   lvm2 a--  24.99g      0 
  /dev/sdb1  test   lvm2 a--  24.99g   8.00m
  /dev/sdc1  test   lvm2 a--  24.99g      0 
  /dev/sdd1  test   lvm2 a--  24.99g      0 
  /dev/sde1  test   lvm2 a--  24.99g      0 
  /dev/sdf1  test   lvm2 a--  24.99g      0 
  /dev/sdg1  test   lvm2 a--  24.99g 100.00m
  /dev/sdh1  test   lvm2 a--  24.99g 100.00m

[root@host-116 ~]# lvremove test
Do you really want to remove active logical volume raid? [y/n]: y
  Unable to reduce RAID LV - operation not implemented.
  Error releasing logical volume "raid"

### RHEL 6.8 (140-3) - succeeds

[root@host-116 ~]# vgcreate test /dev/sd[abcdefgh]1
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdh1" successfully created
  Volume group "test" successfully created
[root@host-116 ~]# lvcreate  --type raid1  -m 1 -L 100M -n raid test
  Logical volume "raid" created.
[root@host-116 ~]# lvs -a -o +devices
  LV              VG     Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices
  raid            test   rwi-a-r--- 100.00m                          100.00   raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0] test   iwi-aor--- 100.00m                                   /dev/sda1(1)
  [raid_rimage_1] test   iwi-aor--- 100.00m                                   /dev/sdb1(1)
  [raid_rmeta_0]  test   ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_rmeta_1]  test   ewi-aor---   4.00m                                   /dev/sdb1(0)
[root@host-116 ~]# lvconvert --thinpool test/raid --yes
  WARNING: Converting logical volume test/raid to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raid to thin pool.
[root@host-116 ~]# lvs -a -o +devices
  LV                    VG     Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]       test   ewi-------   4.00m                                   /dev/sda1(27)
  raid                  test   twi-a-tz-- 100.00m             0.00   0.88           raid_tdata(0)
  [raid_tdata]          test   rwi-aor--- 100.00m                          100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] test   iwi-aor--- 100.00m                                   /dev/sda1(1)
  [raid_tdata_rimage_1] test   iwi-aor--- 100.00m                                   /dev/sdb1(1)
  [raid_tdata_rmeta_0]  test   ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_tdata_rmeta_1]  test   ewi-aor---   4.00m                                   /dev/sdb1(0)
  [raid_tmeta]          test   ewi-ao----   4.00m                                   /dev/sda1(26)
[root@host-116 ~]# lvextend -l100%FREE test/raid
  Extending 2 mirror images.
  Size of logical volume test/raid_tdata changed from 100.00 MiB (25 extents) to 99.86 GiB (25564 extents).
  Logical volume raid successfully resized.
[root@host-116 ~]# lvs -a -o +devices
  LV                    VG     Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]       test   ewi-------   4.00m                                   /dev/sda1(27)
  raid                  test   twi-a-tz--  99.86g             0.00   10.64          raid_tdata(0)
  [raid_tdata]          test   rwi-aor---  99.86g                          100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sda1(1)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sda1(28)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sdc1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sde1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sdg1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdb1(1)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdd1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdf1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdh1(0)
  [raid_tdata_rmeta_0]  test   ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_tdata_rmeta_1]  test   ewi-aor---   4.00m                                   /dev/sdb1(0)
  [raid_tmeta]          test   ewi-ao----   4.00m                                   /dev/sda1(26)
[root@host-116 ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree  
  /dev/sda1  test   lvm2 a--  24.99g      0 
  /dev/sdb1  test   lvm2 a--  24.99g   8.00m
  /dev/sdc1  test   lvm2 a--  24.99g      0 
  /dev/sdd1  test   lvm2 a--  24.99g      0 
  /dev/sde1  test   lvm2 a--  24.99g      0 
  /dev/sdf1  test   lvm2 a--  24.99g      0 
  /dev/sdg1  test   lvm2 a--  24.99g 100.00m
  /dev/sdh1  test   lvm2 a--  24.99g 100.00m

[root@host-116 ~]# lvremove test
Do you really want to remove active logical volume raid? [y/n]: y
  Logical volume "raid" successfully removed

Comment 3 Alasdair Kergon 2016-03-10 11:20:16 UTC
(need -vvvv from the failing command)

Comment 4 Alasdair Kergon 2016-03-10 11:25:38 UTC
Please attach the output from the failing:
  lvremove -vvvv test

Comment 7 Corey Marthaler 2016-03-10 17:18:21 UTC
Created attachment 1134964 [details]
lvremove -vvvv

Comment 10 Zdenek Kabelac 2016-03-15 16:55:31 UTC
This was broken by the following upstream commit:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=b64703401da1f4bef60579a0b3766c087fcfe96a


Working on a bugfix.

Comment 11 Zdenek Kabelac 2016-03-15 22:34:55 UTC
And fixed by this upstream commit:

https://www.redhat.com/archives/lvm-devel/2016-March/msg00114.html

Somehow this case was missed by the test suite.
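
For what it's worth, a minimal regression test for this case might look roughly like the sketch below, assuming the usual lvm2 test/shell conventions (lib/inittest, the aux helpers, and the $vg/$lv1 names they provide). This is an illustration, not the actual test that was added:

  #!/bin/sh
  # Sketch: lvremove of a thin pool whose data LV is a raid1 volume
  # that has been extended after the pool conversion.
  . lib/inittest

  aux have_raid 1 3 0 || skip
  aux have_thin 1 0 0 || skip

  aux prepare_vg 4

  # raid1 LV that will become the pool's data volume (tdata)
  lvcreate --type raid1 -m 1 -L 8M -n $lv1 $vg
  lvconvert --yes --thinpool $vg/$lv1

  # extend the pool, which extends the raid1 data LV underneath it
  lvextend -l+100%FREE $vg/$lv1

  # the regression point: this removal failed with "Unable to reduce RAID LV"
  lvremove -f $vg/$lv1

  vgremove -ff $vg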

Comment 13 Corey Marthaler 2016-03-16 20:58:48 UTC
Verified working again in the latest rpms.

2.6.32-616.el6.x86_64
lvm2-2.02.143-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
lvm2-libs-2.02.143-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
lvm2-cluster-2.02.143-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
udev-147-2.72.el6    BUILT: Tue Mar  1 06:14:05 CST 2016
device-mapper-1.02.117-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-libs-1.02.117-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-event-1.02.117-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-event-libs-1.02.117-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-persistent-data-0.6.2-0.1.rc5.el6    BUILT: Wed Feb 24 07:07:09 CST 2016
cmirror-2.02.143-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016

[root@host-116 ~]# vgcreate test /dev/sd[abcdefgh]1        
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdh1" successfully created
  Volume group "test" successfully created

[root@host-116 ~]# lvcreate  --type raid1  -m 1 -L 100M -n raid test
  Logical volume "raid" created.

[root@host-116 ~]# lvconvert --thinpool test/raid --yes
  WARNING: Converting logical volume test/raid to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raid to thin pool.

[root@host-116 ~]# lvextend -l100%FREE test/raid
  Extending 2 mirror images.
  Size of logical volume test/raid_tdata changed from 100.00 MiB (25 extents) to 99.86 GiB (25564 extents).
  Logical volume raid successfully resized.

[root@host-116 ~]# lvs -a -o +devices
  LV                    VG   Attr         LSize Pool Origin Data% Meta%  Cpy%Sync Devices
  [lvol0_pmspare]       test ewi-------   4.00m                                   /dev/sda1(27)
  raid                  test twi-a-tz--  99.86g             0.00  10.64           raid_tdata(0)
  [raid_tdata]          test rwi-aor---  99.86g                          100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] test iwi-aor---  99.86g                                   /dev/sda1(1)
  [raid_tdata_rimage_0] test iwi-aor---  99.86g                                   /dev/sda1(28)
  [raid_tdata_rimage_0] test iwi-aor---  99.86g                                   /dev/sdc1(0)
  [raid_tdata_rimage_0] test iwi-aor---  99.86g                                   /dev/sde1(0)
  [raid_tdata_rimage_0] test iwi-aor---  99.86g                                   /dev/sdg1(0)
  [raid_tdata_rimage_1] test iwi-aor---  99.86g                                   /dev/sdb1(1)
  [raid_tdata_rimage_1] test iwi-aor---  99.86g                                   /dev/sdd1(0)
  [raid_tdata_rimage_1] test iwi-aor---  99.86g                                   /dev/sdf1(0)
  [raid_tdata_rimage_1] test iwi-aor---  99.86g                                   /dev/sdh1(0)
  [raid_tdata_rmeta_0]  test ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_tdata_rmeta_1]  test ewi-aor---   4.00m                                   /dev/sdb1(0)
  [raid_tmeta]          test ewi-ao----   4.00m                                   /dev/sda1(26)

[root@host-116 ~]# lvremove test
Do you really want to remove active logical volume raid? [y/n]: y
  Logical volume "raid" successfully removed

Comment 15 errata-xmlrpc 2016-05-11 01:20:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0964.html

