Bug 1225580 - need to turn off ability to reduce size of thin pool meta device
Summary: need to turn off ability to reduce size of thin pool meta device
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.7
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-27 17:16 UTC by Corey Marthaler
Modified: 2016-05-11 01:17 UTC
CC List: 8 users

Fixed In Version: lvm2-2.02.140-1.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-11 01:17:11 UTC
Target Upstream Version:




Links
  System ID:    Red Hat Product Errata RHBA-2016:0964
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      lvm2 bug fix and enhancement update
  Last Updated: 2016-05-10 22:57:40 UTC

Description Corey Marthaler 2015-05-27 17:16:40 UTC
Description of problem:
[root@host-109 ~]# lvs -a -o +devices
  LV              Attr       LSize   Pool   Origin Data%  Meta% Devices
  [lvol0_pmspare] ewi-------   4.00m                            /dev/sdf1(0)
  origin          Vwi-aotz--   1.00g resize        16.60
  other1          Vwi-a-tz--   1.00g resize        0.00
  other2          Vwi-a-tz--   1.00g resize        0.00
  other3          Vwi-a-tz--   1.00g resize        0.00
  other4          Vwi-a-tz--   1.00g resize        0.00
  other5          Vwi-a-tz--   1.00g resize        0.00
  resize          twi-aotz--   4.00g               4.17   1.95  resize_tdata(0)
  [resize_tdata]  Twi-ao----   4.00g                            /dev/sdf1(1)
  [resize_tmeta]  ewi-ao----   4.00m                            /dev/sdd1(0)
  snap1           Vwi-a-tz-k   1.00g resize origin 16.60

[root@host-109 ~]# lvreduce -L -200M snapper_thinp/resize
  Thin pool volumes cannot be reduced in size yet.
  Run `lvreduce --help' for more information.

[root@host-109 ~]# lvextend -L +200M snapper_thinp/resize_tmeta
  Size of logical volume snapper_thinp/resize_tmeta changed from 4.00 MiB (1 extents) to 204.00 MiB (51 extents).
  Logical volume resize_tmeta successfully resized
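
(As a sanity check on the numbers: the extent size in this VG is 4 MiB, so +200 MiB is 50 extents, taking resize_tmeta from 1 extent to 51.)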

[root@host-109 ~]# lvreduce -L -200M snapper_thinp/resize_tmeta
  WARNING: Reducing active and open logical volume to 4.00 MiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce resize_tmeta? [y/n]: y
  Size of logical volume snapper_thinp/resize_tmeta changed from 204.00 MiB (51 extents) to 4.00 MiB (1 extents).
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume snapper_thinp-resize-tpool (253:4)
  Problem reactivating logical volume snapper_thinp/origin.
  Releasing activation in critical section.
  libdevmapper exiting with 2 device(s) still suspended.

May 27 11:58:58 host-109 kernel: device-mapper: thin: 253:4: metadata device (1024 blocks) too small: expected 52224
May 27 11:58:58 host-109 kernel: device-mapper: table: 253:4: thin-pool: preresume failed, error = -22
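
The kernel's counts match the sizes above, given the default 4 KiB thin-pool metadata block size: 204 MiB / 4 KiB = 52224 blocks expected, versus 4 MiB / 4 KiB = 1024 blocks after the reduction, so the preresume check fails with -EINVAL (error = -22) and the pool is left suspended.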

[DEADLOCK]
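
At this point two device-mapper devices are left suspended (per the libdevmapper message above). A minimal way to confirm which ones, assuming only standard dmsetup tooling, is:

  # each dm device prints a block; suspended ones show "State: SUSPENDED"
  dmsetup info | grep -E '^(Name|State)'

No manual resume was attempted here; the machine was rebooted instead, as shown below.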

# After reboot
[root@host-109 ~]# lvremove snapper_thinp
Removing pool "resize" will remove 6 dependent volume(s). Proceed? [y/n]: y
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume snapper_thinp-resize-tpool (253:4)
  Failed to update pool snapper_thinp/resize.
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume snapper_thinp-resize-tpool (253:4)
  Failed to update pool snapper_thinp/resize.
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume snapper_thinp-resize-tpool (253:4)
  Failed to update pool snapper_thinp/resize.
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume snapper_thinp-resize-tpool (253:4)
  Failed to update pool snapper_thinp/resize.
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume snapper_thinp-resize-tpool (253:4)
  Failed to update pool snapper_thinp/resize.
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume snapper_thinp-resize-tpool (253:4)
  Failed to update pool snapper_thinp/resize.
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume snapper_thinp-resize-tpool (253:4)
  Failed to update pool snapper_thinp/resize.


Version-Release number of selected component (if applicable):
2.6.32-562.el6.x86_64

lvm2-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
lvm2-libs-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
lvm2-cluster-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
udev-147-2.62.el6    BUILT: Thu Apr 23 05:44:37 CDT 2015
device-mapper-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-libs-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-event-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-event-libs-1.02.95-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.118-2.el6    BUILT: Wed Apr 15 06:34:08 CDT 2015

Comment 2 Corey Marthaler 2015-05-27 18:54:43 UTC
FWIW, cache pools appear to be locked down properly.

# cache pool only
[root@host-109 ~]# lvs -a -o +devices
  LV                    Attr       LSize   Pool Origin Data% Meta% Cpy%Sync Devices
  corigin               -wi-a-----   4.00g                                  /dev/sda1(0)
  display_cache         Cwi---C---   2.00g                                  display_cache_cdata(0)
  [display_cache_cdata] Cwi-------   2.00g                                  /dev/sdf1(0)
  [display_cache_cmeta] ewi-------  12.00m                                  /dev/sdf1(512)
  [lvol0_pmspare]       ewi-------  12.00m                                  /dev/sdf1(515)

[root@host-109 ~]# lvreduce -L -200M cache_sanity/display_cache
  Unable to resize logical volumes of cache type.
[root@host-109 ~]# lvreduce -L -200M cache_sanity/display_cache_cdata
  Can't resize internal logical volume display_cache_cdata
  Run `lvreduce --help' for more information.
[root@host-109 ~]# lvreduce -L -200M cache_sanity/display_cache_cmeta
  Can't resize internal logical volume display_cache_cmeta
  Run `lvreduce --help' for more information.
[root@host-109 ~]# lvreduce -L -200M cache_sanity/lvol0_pmspare
  Can't resize internal logical volume lvol0_pmspare
  Run `lvreduce --help' for more information.


# cache volume with cache pool
[root@host-109 ~]# lvs -a -o +devices
  LV                    Attr       LSize   Pool            Origin          Data% Meta% Cpy%Sync Devices
  corigin               Cwi-a-C---   4.00g [display_cache] [corigin_corig] 0.01  4.39  0.00     corigin_corig(0)
  [corigin_corig]       owi-aoC---   4.00g                                                      /dev/sdc1(0)
  [display_cache]       Cwi---C---   2.00g                                 0.01  4.39  0.00     display_cache_cdata(0)
  [display_cache_cdata] Cwi-ao----   2.00g                                                      /dev/sdb1(0)
  [display_cache_cmeta] ewi-ao----  12.00m                                                      /dev/sdb1(512)
  [lvol0_pmspare]       ewi-------  12.00m                                                      /dev/sdf1(0)

[root@host-109 ~]# lvreduce -L -200M cache_sanity/corigin
  Unable to resize logical volumes of cache type.
[root@host-109 ~]# lvreduce -L -200M cache_sanity/corigin_corig
  Can't resize internal logical volume corigin_corig
  Run `lvreduce --help' for more information.
[root@host-109 ~]# lvreduce -L -200M cache_sanity/display_cache
  Can't resize internal logical volume display_cache
  Run `lvreduce --help' for more information.
[root@host-109 ~]# lvreduce -L -200M cache_sanity/display_cache_cdata
  Can't resize internal logical volume display_cache_cdata
  Run `lvreduce --help' for more information.
[root@host-109 ~]# lvreduce -L -200M cache_sanity/display_cache_cmeta
  Can't resize internal logical volume display_cache_cmeta
  Run `lvreduce --help' for more information.
[root@host-109 ~]# lvreduce -L -200M cache_sanity/lvol0_pmspare
  Can't resize internal logical volume lvol0_pmspare
  Run `lvreduce --help' for more information.

Comment 3 Zdenek Kabelac 2015-05-28 07:46:40 UTC
In some future version, there will be support at least for reducing the metadata of an inactive thin pool volume (essentially doing a thin_restore into a smaller LV).

But at the moment it's clearly a fault of the tool.
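
A rough sketch of that manual path with today's tools, assuming an inactive pool; old_meta and small_meta are illustrative names, and the whole sequence is an untested outline, not a supported procedure:

  # swap a placeholder LV in for the pool's metadata so the old metadata becomes a plain LV
  lvcreate -L 8M -n old_meta snapper_thinp
  lvconvert --thinpool snapper_thinp/resize --poolmetadata snapper_thinp/old_meta
  # dump the old metadata and restore it into a new, smaller LV
  lvchange -ay snapper_thinp/old_meta
  thin_dump /dev/snapper_thinp/old_meta > /tmp/meta.xml
  lvcreate -L 4M -n small_meta snapper_thinp
  thin_restore -i /tmp/meta.xml -o /dev/snapper_thinp/small_meta
  # swap the restored, smaller metadata back into the pool
  lvchange -an snapper_thinp/small_meta
  lvconvert --thinpool snapper_thinp/resize --poolmetadata snapper_thinp/small_meta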

Comment 4 Zdenek Kabelac 2015-08-24 09:33:37 UTC
Switched off by upstream commit:

https://www.redhat.com/archives/lvm-devel/2015-August/msg00166.html

Comment 8 Corey Marthaler 2016-02-18 20:10:01 UTC
Fix verified in the latest rpms.

### 6.7
[root@host-117 ~]# lvs -a -o +devices
  LV              Attr       LSize   Pool   Origin Data%  Meta% Devices
  [lvol0_pmspare] ewi-------   4.00m                            /dev/sda1(0)
  origin          Vwi-a-t---   1.00g resize        31.84
  other1          Vwi-a-t---   1.00g resize        0.01
  other2          Vwi-a-t---   1.00g resize        0.01
  other3          Vwi-a-t---   1.00g resize        0.01
  other4          Vwi-a-t---   1.00g resize        0.01
  other5          Vwi-a-t---   1.00g resize        0.01
  resize          twi-aot---   5.00g               6.53   0.28  resize_tdata(0)
  [resize_tdata]  Twi-ao----   5.00g                            /dev/sda1(1)
  [resize_tmeta]  ewi-ao---- 104.00m                            /dev/sdf1(0)
  snap1           Vwi-a-t---   1.00g resize origin 16.46
  snap2           Vwi-a-t---   1.00g resize origin 31.84

[root@host-117 ~]# lvreduce -L -2G snapper_thinp/resize
  Thin pool volumes cannot be reduced in size yet.
  Run `lvreduce --help' for more information.
[root@host-117 ~]# lvreduce -f -L -100M snapper_thinp/resize_tmeta
  WARNING: Reducing active and open logical volume to 4.00 MiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
  Size of logical volume snapper_thinp/resize_tmeta changed from 104.00 MiB (26 extents) to 4.00 MiB (1 extents).
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume snapper_thinp-resize-tpool (253:4)
  Problem reactivating logical volume snapper_thinp/origin.
  Releasing activation in critical section.
  libdevmapper exiting with 2 device(s) still suspended.



### 6.8
[root@host-137 ~]# lvs -a -o +devices
  LV              Attr       LSize   Pool   Origin Data%  Meta% Devices
  [lvol0_pmspare] ewi-------   4.00m                            /dev/sdd1(0)
  origin          Vwi-a-t---   1.00g resize        32.25
  other1          Vwi-a-t---   1.00g resize        0.01
  other2          Vwi-a-t---   1.00g resize        0.01
  other3          Vwi-a-t---   1.00g resize        0.01
  other4          Vwi-a-t---   1.00g resize        0.01
  other5          Vwi-a-t---   1.00g resize        0.01
  resize          twi-aot---   5.00g               6.62   0.30  resize_tdata(0)
  [resize_tdata]  Twi-ao----   5.00g                            /dev/sdd1(1)
  [resize_tmeta]  ewi-ao---- 104.00m                            /dev/sdb1(0)
  snap1           Vwi-a-t---   1.00g resize origin 17.16
  snap2           Vwi-a-t---   1.00g resize origin 32.25

[root@host-137 ~]# lvreduce -L -2G snapper_thinp/resize
  Thin pool volumes cannot be reduced in size yet.
  Run `lvreduce --help' for more information.
[root@host-137 ~]# lvreduce -f -L -100M snapper_thinp/resize_tmeta
  Thin pool metadata volumes cannot be reduced.
  Run `lvreduce --help' for more information.
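
With the fix in place, the only supported direction for pool metadata is growth; for example (using the pool name from the transcripts above and lvextend's --poolmetadatasize option):

  lvextend --poolmetadatasize +100M snapper_thinp/resize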



2.6.32-615.el6.x86_64
lvm2-2.02.141-2.el6    BUILT: Wed Feb 10 07:49:03 CST 2016
lvm2-libs-2.02.141-2.el6    BUILT: Wed Feb 10 07:49:03 CST 2016
lvm2-cluster-2.02.141-2.el6    BUILT: Wed Feb 10 07:49:03 CST 2016
udev-147-2.71.el6    BUILT: Wed Feb 10 07:07:17 CST 2016
device-mapper-1.02.115-2.el6    BUILT: Wed Feb 10 07:49:03 CST 2016
device-mapper-libs-1.02.115-2.el6    BUILT: Wed Feb 10 07:49:03 CST 2016
device-mapper-event-1.02.115-2.el6    BUILT: Wed Feb 10 07:49:03 CST 2016
device-mapper-event-libs-1.02.115-2.el6    BUILT: Wed Feb 10 07:49:03 CST 2016
device-mapper-persistent-data-0.6.2-0.1.rc1.el6    BUILT: Wed Feb 10 09:52:15 CST 2016
cmirror-2.02.141-2.el6    BUILT: Wed Feb 10 07:49:03 CST 2016

Comment 10 errata-xmlrpc 2016-05-11 01:17:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0964.html

