Bug 1315491 - unable to remove extended thin pool on top of raid - "Unable to reduce RAID LV"
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.8
Hardware: x86_64 Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
Keywords: Regression
Reported: 2016-03-07 16:57 EST by Corey Marthaler
Modified: 2016-05-10 21:20 EDT (History)
CC List: 9 users

Fixed In Version: lvm2-2.02.143-2.el6
Doc Type: Bug Fix
Last Closed: 2016-05-10 21:20:53 EDT
Type: Bug


Attachments
lvremove -vvvv (86.59 KB, text/plain)
2016-03-10 12:18 EST, Corey Marthaler

Description Corey Marthaler 2016-03-07 16:57:06 EST
Description of problem:
This appears to be a regression introduced between the 6.8 rpms 140-3 and 141-2, and it is still present in the current version, 143-1.
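For convenience, the failing sequence condenses to this sketch (device names are taken from the transcripts below; adjust to your own scratch PVs, and note -f just skips the y/n prompt):

  vgcreate test /dev/sd[abcdefgh]1
  lvcreate --type raid1 -m 1 -L 100M -n raid test
  lvconvert --thinpool test/raid --yes
  lvextend -l 100%FREE test/raid
  lvremove -f test    # fails on 141-2 and 143-1, succeeds on 140-3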


### RHEL 6.8 (141-2)

[root@host-116 ~]# vgcreate test /dev/sd[abcdefgh]1
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdh1" successfully created
  Volume group "test" successfully created
[root@host-116 ~]#  lvcreate  --type raid1  -m 1 -L 100M -n raid test
  Logical volume "raid" created.
[root@host-116 ~]# lvs -a -o +devices
  LV              VG     Attr       LSize   Pool Origin Data%  Meta%  Cpy%Sync Devices
  raid            test   rwi-a-r--- 100.00m                           100.00   raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0] test   iwi-aor--- 100.00m                                    /dev/sda1(1)
  [raid_rimage_1] test   iwi-aor--- 100.00m                                    /dev/sdb1(1)
  [raid_rmeta_0]  test   ewi-aor---   4.00m                                    /dev/sda1(0)
  [raid_rmeta_1]  test   ewi-aor---   4.00m                                    /dev/sdb1(0)
[root@host-116 ~]# lvconvert --thinpool test/raid --yes
  WARNING: Converting logical volume test/raid to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raid to thin pool.
[root@host-116 ~]# lvs -a -o +devices
  LV                    VG     Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]       test   ewi-------   4.00m                                   /dev/sda1(27)
  raid                  test   twi-a-tz-- 100.00m             0.00   0.88           raid_tdata(0)
  [raid_tdata]          test   rwi-aor--- 100.00m                          100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] test   iwi-aor--- 100.00m                                   /dev/sda1(1)
  [raid_tdata_rimage_1] test   iwi-aor--- 100.00m                                   /dev/sdb1(1)
  [raid_tdata_rmeta_0]  test   ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_tdata_rmeta_1]  test   ewi-aor---   4.00m                                   /dev/sdb1(0)
  [raid_tmeta]          test   ewi-ao----   4.00m                                   /dev/sda1(26)
[root@host-116 ~]# lvextend -l100%FREE test/raid
  Extending 2 mirror images.
  Size of logical volume test/raid_tdata changed from 100.00 MiB (25 extents) to 99.86 GiB (25564 extents).
  Logical volume raid successfully resized.
[root@host-116 ~]# lvs -a -o +devices
  LV                    VG     Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]       test   ewi-------   4.00m                                   /dev/sda1(27)
  raid                  test   twi-a-tz-- 100.00m             0.00   0.88           raid_tdata(0)
  [raid_tdata]          test   rwi-aor---  99.86g                          100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sda1(1)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sda1(28)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sdc1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sde1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sdg1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdb1(1)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdd1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdf1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdh1(0)
  [raid_tdata_rmeta_0]  test   ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_tdata_rmeta_1]  test   ewi-aor---   4.00m                                   /dev/sdb1(0)
  [raid_tmeta]          test   ewi-ao----   4.00m                                   /dev/sda1(26)

[root@host-116 ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree  
  /dev/sda1  test   lvm2 a--  24.99g      0 
  /dev/sdb1  test   lvm2 a--  24.99g   8.00m
  /dev/sdc1  test   lvm2 a--  24.99g      0 
  /dev/sdd1  test   lvm2 a--  24.99g      0 
  /dev/sde1  test   lvm2 a--  24.99g      0 
  /dev/sdf1  test   lvm2 a--  24.99g      0 
  /dev/sdg1  test   lvm2 a--  24.99g 100.00m
  /dev/sdh1  test   lvm2 a--  24.99g 100.00m

[root@host-116 ~]# lvremove test
Do you really want to remove active logical volume raid? [y/n]: y
  Unable to reduce RAID LV - operation not implemented.
  Error releasing logical volume "raid"

### RHEL 6.8 (140-3)

[root@host-116 ~]# vgcreate test /dev/sd[abcdefgh]1
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdh1" successfully created
  Volume group "test" successfully created
[root@host-116 ~]# lvcreate  --type raid1  -m 1 -L 100M -n raid test
  Logical volume "raid" created.
[root@host-116 ~]# lvs -a -o +devices
  LV              VG     Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices
  raid            test   rwi-a-r--- 100.00m                          100.00   raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0] test   iwi-aor--- 100.00m                                   /dev/sda1(1)
  [raid_rimage_1] test   iwi-aor--- 100.00m                                   /dev/sdb1(1)
  [raid_rmeta_0]  test   ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_rmeta_1]  test   ewi-aor---   4.00m                                   /dev/sdb1(0)
[root@host-116 ~]# lvconvert --thinpool test/raid --yes
  WARNING: Converting logical volume test/raid to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raid to thin pool.
[root@host-116 ~]# lvs -a -o +devices
  LV                    VG     Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]       test   ewi-------   4.00m                                   /dev/sda1(27)
  raid                  test   twi-a-tz-- 100.00m             0.00   0.88           raid_tdata(0)
  [raid_tdata]          test   rwi-aor--- 100.00m                          100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] test   iwi-aor--- 100.00m                                   /dev/sda1(1)
  [raid_tdata_rimage_1] test   iwi-aor--- 100.00m                                   /dev/sdb1(1)
  [raid_tdata_rmeta_0]  test   ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_tdata_rmeta_1]  test   ewi-aor---   4.00m                                   /dev/sdb1(0)
  [raid_tmeta]          test   ewi-ao----   4.00m                                   /dev/sda1(26)
[root@host-116 ~]# lvextend -l100%FREE test/raid
  Extending 2 mirror images.
  Size of logical volume test/raid_tdata changed from 100.00 MiB (25 extents) to 99.86 GiB (25564 extents).
  Logical volume raid successfully resized.
[root@host-116 ~]# lvs -a -o +devices
  LV                    VG     Attr       LSize   Pool Origin Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]       test   ewi-------   4.00m                                   /dev/sda1(27)
  raid                  test   twi-a-tz--  99.86g             0.00   10.64          raid_tdata(0)
  [raid_tdata]          test   rwi-aor---  99.86g                          100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sda1(1)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sda1(28)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sdc1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sde1(0)
  [raid_tdata_rimage_0] test   iwi-aor---  99.86g                                   /dev/sdg1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdb1(1)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdd1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdf1(0)
  [raid_tdata_rimage_1] test   iwi-aor---  99.86g                                   /dev/sdh1(0)
  [raid_tdata_rmeta_0]  test   ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_tdata_rmeta_1]  test   ewi-aor---   4.00m                                   /dev/sdb1(0)
  [raid_tmeta]          test   ewi-ao----   4.00m                                   /dev/sda1(26)
[root@host-116 ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree  
  /dev/sda1  test   lvm2 a--  24.99g      0 
  /dev/sdb1  test   lvm2 a--  24.99g   8.00m
  /dev/sdc1  test   lvm2 a--  24.99g      0 
  /dev/sdd1  test   lvm2 a--  24.99g      0 
  /dev/sde1  test   lvm2 a--  24.99g      0 
  /dev/sdf1  test   lvm2 a--  24.99g      0 
  /dev/sdg1  test   lvm2 a--  24.99g 100.00m
  /dev/sdh1  test   lvm2 a--  24.99g 100.00m

[root@host-116 ~]# lvremove test
Do you really want to remove active logical volume raid? [y/n]: y
  Logical volume "raid" successfully removed
Comment 3 Alasdair Kergon 2016-03-10 06:20:16 EST
(need -vvvv from the failing command)
Comment 4 Alasdair Kergon 2016-03-10 06:25:38 EST
Please attach the output from the failing command:
  lvremove -vvvv test
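(For capturing this: LVM writes its verbose trace to stderr, so something like the sketch below saves the whole log; the -f flag and the log path are just for convenience:

  lvremove -f -vvvv test 2> /tmp/lvremove-vvvv.log)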
Comment 7 Corey Marthaler 2016-03-10 12:18 EST
Created attachment 1134964
lvremove -vvvv
Comment 10 Zdenek Kabelac 2016-03-15 12:55:31 EDT
This got broken by the following upstream commit:

https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=b64703401da1f4bef60579a0b3766c087fcfe96a

Working on a bugfix.
Comment 11 Zdenek Kabelac 2016-03-15 18:34:55 EDT
And fixed by this upstream commit:

https://www.redhat.com/archives/lvm-devel/2016-March/msg00114.html

Somehow this case was missed by the test suite.
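As a rough sketch of the missing coverage (this loosely mimics the reproducer above using a test-suite-style $vg variable; it is not the actual test committed upstream):

  lvcreate --type raid1 -m 1 -L 100M -n raid $vg
  lvconvert --thinpool $vg/raid --yes
  lvextend -l 100%FREE $vg/raid
  lvremove -f $vg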
Comment 13 Corey Marthaler 2016-03-16 16:58:48 EDT
Verified working again in the latest rpms.

2.6.32-616.el6.x86_64
lvm2-2.02.143-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
lvm2-libs-2.02.143-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
lvm2-cluster-2.02.143-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
udev-147-2.72.el6    BUILT: Tue Mar  1 06:14:05 CST 2016
device-mapper-1.02.117-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-libs-1.02.117-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-event-1.02.117-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-event-libs-1.02.117-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-persistent-data-0.6.2-0.1.rc5.el6    BUILT: Wed Feb 24 07:07:09 CST 2016
cmirror-2.02.143-2.el6    BUILT: Wed Mar 16 08:30:42 CDT 2016

[root@host-116 ~]# vgcreate test /dev/sd[abcdefgh]1        
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdh1" successfully created
  Volume group "test" successfully created

[root@host-116 ~]# lvcreate  --type raid1  -m 1 -L 100M -n raid test
  Logical volume "raid" created.

[root@host-116 ~]# lvconvert --thinpool test/raid --yes
  WARNING: Converting logical volume test/raid to pool's data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raid to thin pool.

[root@host-116 ~]# lvextend -l100%FREE test/raid
  Extending 2 mirror images.
  Size of logical volume test/raid_tdata changed from 100.00 MiB (25 extents) to 99.86 GiB (25564 extents).
  Logical volume raid successfully resized.

[root@host-116 ~]# lvs -a -o +devices
  LV                    VG   Attr         LSize Pool Origin Data% Meta%  Cpy%Sync Devices
  [lvol0_pmspare]       test ewi-------   4.00m                                   /dev/sda1(27)
  raid                  test twi-a-tz--  99.86g             0.00  10.64           raid_tdata(0)
  [raid_tdata]          test rwi-aor---  99.86g                          100.00   raid_tdata_rimage_0(0),raid_tdata_rimage_1(0)
  [raid_tdata_rimage_0] test iwi-aor---  99.86g                                   /dev/sda1(1)
  [raid_tdata_rimage_0] test iwi-aor---  99.86g                                   /dev/sda1(28)
  [raid_tdata_rimage_0] test iwi-aor---  99.86g                                   /dev/sdc1(0)
  [raid_tdata_rimage_0] test iwi-aor---  99.86g                                   /dev/sde1(0)
  [raid_tdata_rimage_0] test iwi-aor---  99.86g                                   /dev/sdg1(0)
  [raid_tdata_rimage_1] test iwi-aor---  99.86g                                   /dev/sdb1(1)
  [raid_tdata_rimage_1] test iwi-aor---  99.86g                                   /dev/sdd1(0)
  [raid_tdata_rimage_1] test iwi-aor---  99.86g                                   /dev/sdf1(0)
  [raid_tdata_rimage_1] test iwi-aor---  99.86g                                   /dev/sdh1(0)
  [raid_tdata_rmeta_0]  test ewi-aor---   4.00m                                   /dev/sda1(0)
  [raid_tdata_rmeta_1]  test ewi-aor---   4.00m                                   /dev/sdb1(0)
  [raid_tmeta]          test ewi-ao----   4.00m                                   /dev/sda1(26)

[root@host-116 ~]# lvremove test
Do you really want to remove active logical volume raid? [y/n]: y
  Logical volume "raid" successfully removed
Comment 15 errata-xmlrpc 2016-05-10 21:20:53 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0964.html
